Since commercial stereo recordings arrived in 1958, we’ve been chasing a dream: to recreate the experience of live performances in our homes. The promise was simple – two speakers could paint a three-dimensional sonic picture, placing instruments and voices in space just as we hear them in real life. Yet despite decades of advancement, something fundamental has always been missing. The culprit? The passive components between your amplifier and your ears.
To understand what’s going wrong, let’s start with something we all experience every day: vision.
We live with stereo vision, which we take completely for granted. Right now, you’re judging the distance to your screen, the depth of the room around you, and the dimensions of everything you see. This ability is essential – imagine trying to drive, pour a cup of coffee, or navigate a crowded street without it.
This three-dimensional perception works because each eye sees a slightly different image. Your brain receives these two perspectives through your optic nerves and seamlessly combines them into a single, dimensional view of the world.
Your ears work on the same principle, but they’re far more sensitive than your eyes. Evolution gifted us with this extraordinary hearing sensitivity for survival – our ancestors needed to detect predators they couldn’t see.
When a sound occurs to your left, it reaches your left ear fractionally before your right ear. This tiny time difference, combined with subtle changes in volume and tone, allows your brain to pinpoint exactly where that sound originated. Just like your eyes, your ears give you a three-dimensional map of the world.
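To put a rough number on that time difference, here is a back-of-envelope sketch; the 22cm ear spacing is an assumption for illustration, not a measured figure:

```python
# Back-of-envelope estimate of the largest interaural time difference
# available to the brain, assuming ~22 cm ear-to-ear spacing (an assumption).
SPEED_OF_SOUND = 343.0  # metres per second in air at room temperature
EAR_SPACING = 0.22      # metres, assumed typical ear-to-ear distance

max_delay_us = EAR_SPACING / SPEED_OF_SOUND * 1e6
print(f"Maximum interaural delay: about {max_delay_us:.0f} microseconds")  # ~640 µs
```

The brain resolves differences far smaller than this fraction of a millisecond, which is why timing matters so much in everything that follows.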
Stereo recording mimics this natural process. Two microphones capture these same timing and volume differences. When played back through two speakers, your brain should reconstruct the original sonic space – placing the violin on the left, the cello on the right, and the piano in the centre.
In theory, this works beautifully.
The system works perfectly until the signal leaves your amplifier and enters what engineers call “the passive domain”: the realm, beyond the power amp, of speaker cables, connectors and, most critically, the loudspeaker itself. Once your music enters this passive world, it encounters a maze of components that fundamentally alter what you hear.
Let’s focus on the biggest culprit: the loudspeaker.
Despite manufacturers’ marketing claims, conventional loudspeakers have changed remarkably little in the past fifty years. Yes, they look more elegant, and the materials have improved, but the fundamental concept remains the same.
Here’s what happens inside a typical speaker: The signal from your amplifier is divided into separate frequency ranges—usually three (or two in smaller speakers). High frequencies go to the tweeter, mid-range frequencies to the mid-range driver, and low frequencies to the woofer. Each driver handles its own portion of the musical spectrum.
This sounds perfectly logical. In theory, it should work beautifully.
To split the signal into these different frequency ranges, speakers use a network of electronic components called a crossover. This is where the problems begin.
Manufacturers typically show you graphs demonstrating how cleanly their crossovers divide the frequency spectrum. These graphs look impressive in brochures, but they only tell you what happens to a simple test tone in a laboratory. They don’t reveal what happens to real music.

[Figure: Red trace shows the ‘Perfect’ speaker response.]

The steeper the crossover's filtering slope, the better it protects each driver from frequencies it can't handle. This seems like a good thing, and in one sense it is. But here's what those graphs don't show you: all filters shift phase with frequency.
(For a detailed explanation on how this happens, see Appendix 1)
What does that mean in plain English? It means that some of the sound gets moved in time—it arrives at your ear when it shouldn't, displaced from where it belongs in the musical performance.
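If you want to see this for yourself, here is a minimal sketch (a hypothetical 2kHz Butterworth crossover, not any manufacturer's design) that computes how much each frequency is delayed by a gentle and a steep filter slope:

```python
# A minimal sketch (hypothetical 2 kHz Butterworth crossover, not any
# manufacturer's design) of how much each frequency is delayed by gentle
# and steep filter slopes. Requires numpy and scipy.
import numpy as np
from scipy import signal

FC = 2 * np.pi * 2000.0                       # assumed crossover point, rad/s
w = 2 * np.pi * np.logspace(2, 4.3, 400)      # 100 Hz .. ~20 kHz, rad/s

for order, label in [(1, "6 dB/octave"), (4, "24 dB/octave")]:
    b, a = signal.butter(order, FC, btype="low", analog=True)
    _, h = signal.freqs(b, a, worN=w)
    phase = np.unwrap(np.angle(h))            # phase response in radians
    delay_us = -np.gradient(phase, w) * 1e6   # group delay: time shift vs frequency
    print(f"{label}: delay varies from {delay_us.min():.1f} to {delay_us.max():.1f} microseconds")
```

The exact numbers don't matter; the shape does. The delay differs across the band, and the steeper the slope, the more the timing varies from one frequency to the next.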
I know what you’re thinking: “I can hear the music clearly through my speakers. I can recognise Ed Sheeran, my favourite orchestra, and every instrument. What’s the problem?”
This is where our visual analogy becomes crucial—and why speaker manufacturers don’t worry too much about this issue.
Imagine looking at a photograph where all the elements are present but slightly out of alignment. You can still recognise the Eiffel Tower, still identify the people in the picture, still understand what you’re looking at. But something is subtly wrong. The spatial relationships aren’t quite right. Details that should be sharp are smeared. Depth is flattened.
You can still recognise Ed Sheeran’s voice, still identify the guitar and drums. But the harmonic structure – the subtle relationships between the fundamental notes and their overtones, the delicate timing that gives music its sense of space and realism – has been damaged. Those of you who are familiar with high-end hi-fi may disagree with me and defend the status quo. The truth is that, like a bad relationship, you don’t realise what was wrong until it’s gone: until you’ve heard the new standard, it’s impossible to imagine.
Here’s the real test: When was the last time you heard live, unamplified music? Not a rock concert with a massive PA system, but a solo trumpet, a grand piano, a double bass, or a string quartet in an intimate venue?
If you’ve had this experience recently, you’ll know there’s a palpable difference between live acoustic instruments and what comes out of even the finest conventional speakers. There’s an immediacy, a presence, a three-dimensional reality that seems to evaporate in reproduction.
What you’re hearing live is the complete, unaltered harmonic structure of the instrument—every fundamental note accompanied by its precise family of overtones, all arriving at your ears in perfect temporal alignment.
This is exactly what loudspeaker crossovers damage.
The phase shifting caused by crossovers is only part of the story. Conventional speakers compound the problem by using different materials for each driver.
A typical speaker might have a silk dome tweeter, a polymer mid-range unit, and a paper pulp bass driver. Sound travels at different speeds through each of these materials. The high frequencies produced by the silk dome arrive at your ear at a different time than the mid-range frequencies from the polymer cone, which arrive at a different time than the bass frequencies from the paper driver.
Now add another layer: none of these materials has a uniform structure at the microscopic level. Paper, for instance, is a mass of randomly oriented fibres. In plastic-coned speakers, the long-chain polymers are far from uniform. As the driver vibrates, different parts of the cone move at slightly different times, further smearing the sound in time.

Here's a telling question: Have you ever seen a professional microphone diaphragm made of paper? No, because recording engineers know that the diaphragm material must behave in a linear, predictable way to capture sound accurately. Yet we accept these same non-ideal materials in the speakers that reproduce that carefully captured sound.
In 2017, John Watkinson, a Fellow of the Audio Engineering Society, published a paper in ‘The Broadcast Bridge’, a well-respected journal aimed at recording engineers and broadcast professionals, in which he comprehensively demolished the reputation of the dome tweeter, whatever material it is made from. This device has graced millions of speakers over the last sixty years and continues to be used.
See our article, ‘Shortcomings of Dome Tweeters’.
The challenge facing consumers and manufacturers is that these distortions are invisible to most listeners, because they have never heard reproduced music without them. You can’t miss what you’ve never experienced. I salute Paul Miller of Hi-Fi News & Record Review for organising a wonderful programme of live music, played expertly by the RPO, at the recent Ascot hi-fi show. And none of this is new: the damage the crossover does was already being discussed in a book called ‘Hi-Fi for Pleasure’, first published in 1956!
We recognise the music because the gross information is intact—the melody, the rhythm, the basic tone of the instruments. But the subtle three-dimensional cues, the precise harmonic relationships, and the sense of real instruments in real space have been scrambled by crossovers and compromised by material inconsistencies.
Stereo vision works because both eyes deliver precise, synchronised information to your brain. Conventional stereo speakers deliver a blurred, time-smeared approximation of the original performance. I admit, of course, that some are much better than others, but fundamentally we are in the jet age, and many people are still flying a biplane with a wooden propeller.
The question is: does it have to be this way?
Once you understand all these fundamental problems with conventional speakers, the question becomes obvious: can we do better?
The answer is the Aurigen Sound Projector!
It has taken fourteen years of development to create a speaker system that eliminates these problems at their source. Not by adding more band-aids, not by using expensive exotic materials to mask the issues, but by fundamentally rethinking how a loudspeaker should work.
90% of what you hear comes from Aurigen’s pair of suspended linear arrays with a nine-octave bandwidth. Let me translate what that means: virtually the entire musical spectrum, including all the upper frequencies that carry transient detail and spatial cues, is reproduced with no crossover in the signal path, so every frequency reaches your ears in its proper time relationship.
Words struggle to convey the difference, but perhaps this testimonial says it best: The chief engineers from a professional recording studio ten miles away came to audition Aurigen.
These are people who listen to the finest studio monitors every working day. Their verdict was unmistakable: this is what recorded music is supposed to sound like.
We have invented what we are calling Audio Archaeology: revealing new details such as tiny drum strokes, the ability to hear right into the recording’s original venue, and the sense of the space between the performers. All previously unheard, and waiting, literally for years, to be discovered and experienced for the first time.
Just when we thought the new speaker had cured the old problems, we discovered we had created a new one.
Aurigen was so transparent, so revealing of what was in the recording, that it exposed a performance ceiling we hadn’t noticed before.
Even the finest conventional cables were now audibly limiting what we could hear. We were still in the passive domain, still dealing with components that altered the signal between the amplifier and the drivers. We had eliminated the crossover, but the cable remained and it was holding us back.
Four years of additional development led us to Ultra Litz Field Cable, inspired by the pioneering electrical work of Nikola Tesla.
Soldered joints in conventional cable create micro-discontinuities in the signal path. Ultra Litz instead uses a solid silver crimped terminal applied with ten tonnes of pressure. This keeps oxygen out, preserving signal integrity and preventing degradation over the years.
The result is a cable that finally gets out of the way—that delivers what the amplifier produces without adding its own signature to the sound.
For over two years now, we have been supplying Ultra Litz Field Cable to the audiophile community in Beverly Hills and beyond: clients who have tried everything and thought they had reached the limits of what cables could do.
Their discovery mirrors what we found: when you remove the limitations, you don’t just hear more detail. You hear more music. The emotional connection that makes us love music in the first place comes flooding back.
Remember where we started? In 1958, stereo recording promised to bring the three-dimensional experience of live performance into our homes. For nearly 70 years, that promise has been compromised by crossovers that scramble timing, materials that smear transients, and physics that locks the sweet spot to a single chair.
Aurigen, combined with Ultra Litz Field Cable, finally delivers on that original promise.

The paradigm has shifted. Welcome to the new era of passive signal management, or more accurately, the era where we’ve finally learned to get out of the music’s way.
Below is an appendix which explains the physics of the points we have raised.
For seventy years, the loudspeaker industry has followed a predictable formula: mostly dome tweeters paired with cone midrange and bass drivers. While materials have evolved, a speaker from the 1950s would still look familiar today. Over 90% of the market follows this conventional path.
But there’s a fundamental flaw in this design that profoundly affects what we hear.
Our hearing is extraordinarily sensitive—perhaps our most acute sense after our emotions. Some researchers suggest that if our vision were as sensitive as our hearing, we could see a 50-watt light bulb from 3,000 miles away in clear air. Any more sensitive, and we’d hear individual air molecules moving (Brownian motion).
We immediately notice when a photograph or film is shot without a tripod—the image is blurred, edges are fuzzy, and depth of field becomes indistinct. Yet we accept as normal the sound from conventional loudspeakers, even though they create a similar “blurring” of the sonic image.
When we hear live acoustic music (not electrically amplified), we recognise something fundamentally different. It has air and space. Images are locked in three dimensions. Instruments start and stop with a precision that reproduced music through conventional speakers cannot approach.
The difference is phase shift with frequency—a non-linear distortion of time itself.
Phase shift is typically measured in degrees at various frequencies, but what does that actually mean to your ears?
Here’s the key insight: degrees of phase shift translate to time delays, and time delays translate to apparent distance changes.
Let me make this concrete:
In conventional loudspeakers, different frequencies experience different phase shifts. This creates the illusion that parts of the speaker cone and tweeter are constantly moving backward and forward relative to your ears—in extreme cases, by as much as 8 inches (203mm).
For those interested in the mathematics:
Time delay (seconds) = (Phase shift in degrees ÷ 360) × (1 ÷ Frequency in Hz)
Distance change (metres) = Time delay × Speed of sound (343 m/s)
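A few lines of code act as a sanity check on these formulas; the 180° and 850Hz values are chosen purely for illustration, and happen to reproduce the 8-inch figure quoted above:

```python
# Worked example of the two formulas above (illustrative values only).
SPEED_OF_SOUND = 343.0  # m/s

def time_delay_s(phase_deg: float, freq_hz: float) -> float:
    """Time delay implied by a phase shift at a given frequency."""
    return (phase_deg / 360.0) / freq_hz

def apparent_shift_m(phase_deg: float, freq_hz: float) -> float:
    """Apparent change in source distance implied by that delay."""
    return time_delay_s(phase_deg, freq_hz) * SPEED_OF_SOUND

# A 180-degree shift at 850 Hz:
print(f"{time_delay_s(180, 850) * 1e3:.2f} ms")      # about 0.59 ms
print(f"{apparent_shift_m(180, 850) * 1e3:.0f} mm")  # about 202 mm, roughly 8 inches
```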
This apparent movement has several destructive effects:
As music plays through a conventional speaker, phase is constantly shifted by different amounts at many different frequencies. The result is a subtle sonic blur. Yes, we recognise it’s Nat King Cole, but somehow he doesn’t sound truly live. We struggle to explain why and often blame the recording.
We’ve been listening to this sonic mangling all our lives and have come to accept it as normal.
Any filter slope steeper than 6dB per octave (first order) creates a non-linear phase shift.
This means that different frequencies are delayed by different amounts: the harmonics of a note arrive out of step with its fundamental, transient attacks are smeared, and the spatial cues embedded in the recording are scrambled.
All of this is done in the name of “improving” the frequency response. It’s throwing the baby out with the bathwater.
If you examine the technical measurements published in serious hi-fi magazines, you may find graphs plotting “Modulus of Impedance with Phase Response.” These graphs show phase shifts in degrees (from 0° to 360°) plotted against frequency.
What you’ll typically see is a line that rises and falls dramatically—”like the Assyrian Empire” as the saying goes. Each peak and trough represents frequencies that are being shifted forward or backward in time relative to others.
A truly high-fidelity speaker would show a straight, flat line—meaning all frequencies maintain their proper time relationships.

Consider this: a highly respected loudspeaker costing $200,000 achieves zero phase shift at only seven frequencies between 10Hz and 50kHz. Is that high fidelity?
The industry often defends itself by claiming that phase distortion is inaudible, or that listeners simply don’t notice it.
If you regularly listen to live music—a choir, a jazz band, a string quartet—you know your home system doesn’t sound the same. It doesn’t have to be this way.
There’s a common belief in audio circles that first-order (6dB per octave) crossovers do not create phase shifts. This isn’t quite accurate.
The truth: all crossover filters create phase shifts. But a first-order crossover is a special case: the phase shifts in its low-pass and high-pass halves are complementary, so the two drivers’ outputs sum back to the original signal with flat magnitude and flat phase.

This complementary behaviour is the key. Because the shift is consistent and predictable, it doesn’t destroy the time relationships between frequencies. The music’s structure remains intact.
All steeper crossover slopes (12dB, 18dB, 24dB per octave) create non-linear phase shift, scrambling the time relationships between frequencies.
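Here is a minimal sketch of that distinction (again with an assumed 2kHz Butterworth crossover): sum the low-pass and high-pass outputs, as the air in front of the drivers does, and compare a first-order slope with a fourth-order one.

```python
# A minimal sketch (hypothetical 2 kHz Butterworth crossover, not any specific
# product): sum the low-pass and high-pass outputs and compare slopes.
import numpy as np
from scipy import signal

wc = 2 * np.pi * 2000.0                        # assumed crossover, rad/s
w = 2 * np.pi * np.logspace(2, 4.3, 300)       # 100 Hz .. ~20 kHz

for order in (1, 4):
    _, lp = signal.freqs(*signal.butter(order, wc, "low", analog=True), worN=w)
    _, hp = signal.freqs(*signal.butter(order, wc, "high", analog=True), worN=w)
    total = lp + hp                            # acoustic sum at the listener
    mag = np.abs(total)
    phase_deg = np.degrees(np.unwrap(np.angle(total)))
    print(f"order {order}: magnitude {mag.min():.2f}..{mag.max():.2f}, "
          f"phase swing {phase_deg.max() - phase_deg.min():.0f} degrees")
```

With the first-order crossover the sum is the original signal, flat in both magnitude and phase; with the fourth-order slope the combined output swings through a full 360 degrees of phase across the band.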
Some loudspeakers use three or more drivers with first-order crossovers at each transition point—controlling bass, midrange, and treble sections. While this is more thoughtful than using steep-slope crossovers, there’s still a problem.
The attack or rise time of any instrument—even a drum or double bass—contains its initial energy and spatial cues in the upper frequency ranges. To filter these frequencies, capacitors of different values are placed in the signal path.
Every capacitor has its own imperfections: equivalent series resistance, dielectric absorption, and a small amount of self-inductance, each of which leaves a subtle signature on the signal passing through it.
You’re still hearing a modified version of the original recording, just a less damaged one.
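The point about attack energy is easy to check with a synthetic transient. This sketch (made-up envelope and tone values, not a measurement of any real instrument) compares a slow and a fast attack on the same 220Hz note:

```python
# A small sketch: the faster a note's attack, the more of its energy sits in
# the upper frequencies. Synthetic 220 Hz tones with made-up envelope values.
import numpy as np

fs = 48_000                                   # sample rate, Hz
t = np.arange(fs) / fs                        # one second of time
for rise_ms in (5.0, 0.5):
    envelope = np.clip(t / (rise_ms / 1000), 0.0, 1.0) * np.exp(-t / 0.2)
    tone = envelope * np.sin(2 * np.pi * 220 * t)
    power = np.abs(np.fft.rfft(tone)) ** 2
    freqs = np.fft.rfftfreq(len(tone), 1 / fs)
    hf_share = power[freqs > 2000].sum() / power.sum()
    print(f"{rise_ms} ms attack: {hf_share:.2%} of energy above 2 kHz")
```

The faster attack pushes a markedly larger share of its energy above 2kHz, which is exactly the region that crossover filtering affects.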
The ideal solution minimises filtering in the critical frequency ranges where music lives and our hearing is most sensitive.
Consider a design where nine octaves—including all the crucial upper frequencies where transient information and harmonics reside—are reproduced with no filter at all in the signal path. The bass section uses a single inductor to gently roll off frequencies above approximately 200Hz at 6dB per octave, well below the range where phase shift would affect spatial cues and harmonic relationships.
This approach preserves the time relationships that make music sound real.
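For the curious, sizing that single inductor is a one-line calculation: a series inductor feeding a nominally resistive driver forms a first-order low-pass filter with its corner at R ÷ (2πL). In this sketch the 8-ohm driver resistance is an assumed round number, not a product specification:

```python
# Sizing the single series inductor for a 6 dB/octave roll-off (sketch only;
# the 8-ohm load is an assumed round number, not a product specification).
import math

R_OHMS = 8.0    # assumed nominal driver resistance
FC_HZ = 200.0   # roll-off corner from the text

inductance_mh = R_OHMS / (2 * math.pi * FC_HZ) * 1e3
print(f"Required inductance: about {inductance_mh:.1f} mH")  # ≈ 6.4 mH
```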
Conventional loudspeakers may measure well on frequency response graphs, but those graphs don’t tell the whole story. They lack the dimension of time—the when of sound reproduction, not just the what.
Phase shift with frequency scrambles the precise timing relationships that our extraordinarily sensitive hearing uses to re-create a three-dimensional sonic image and distinguish between live and reproduced sound.
It’s the jarring of our unconscious ability to decode these timing cues that makes reproduced music sound “not quite right,” even when we struggle to articulate exactly what’s wrong.
The technology exists to preserve these timing relationships. The question is whether the industry – and consumers – will demand it.