{"id":12789,"date":"2026-05-15T05:38:15","date_gmt":"2026-05-15T02:38:15","guid":{"rendered":"https:\/\/gear.neuropunk.ru\/?p=12789"},"modified":"2026-05-15T07:59:38","modified_gmt":"2026-05-15T04:59:38","slug":"kill-the-tube","status":"publish","type":"post","link":"https:\/\/gear.neuropunk.ru\/en\/kill-the-tube\/","title":{"rendered":"&#x2620;&#xfe0f; Kill the Tube"},"content":{"rendered":"<div class=\"wpb-content-wrapper\"><p>[vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x1f3ac; What we mean by &#8220;the tube&#8221;<\/h2>\n<p style=\"font-style: italic; color: #aaa; font-size: 1.05em;\" data-darkreader-inline-color=\"\">A thriller of haunting cup reflections, phantoms of phase distortion, and why you don&#8217;t hear them \u2014 but they&#8217;re there.<\/p>\n<div style=\"background: rgba(212, 175, 55, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #d4af37; margin: 15px 0 25px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">In Russian studio slang, we call this <em>truba<\/em> \u2014 literally <strong>&#8220;the pipe&#8221;<\/strong> or <strong>&#8220;the tube&#8221;<\/strong>. The English equivalent is what audio engineers call <strong>boxiness<\/strong> or <strong>cup coloration<\/strong> \u2014 that pipe-like cavity coloration that bleeds into everything you hear. We&#8217;ll call it <strong>&#8220;the tube&#8221;<\/strong> throughout this article \u2014 a parasitic time-domain distortion caused by reflections inside the headphone cup.<\/div>\n<p>There&#8217;s no formal definition for it and no mention in the technical literature. 
Anyone whose work involves sound knows what we mean: <strong>parasitic phase-and-frequency distortion inside the headphone cup<\/strong>, where a clean signal at the input ends up sounding like it was pushed through a piece of tin pipe.<\/p>\n<p>The paradox of the tube is that it&#8217;s <strong style=\"color: #ff6b9d;\" data-darkreader-inline-color=\"\">invisible on the usual graphs<\/strong>. The frequency response can look perfectly flat, and the headphones still &#8220;tube&#8221;. You put them on and immediately hear that something is wrong compared to the previous pair. But you can&#8217;t put your finger on what exactly is different. On a frequency response graph both pairs can look identical. Technically it&#8217;s the same instrument. To the ear \u2014 different. <strong>That&#8217;s the tube.<\/strong> It&#8217;s felt, not described. Over time the brain learns to tune it out as background noise, and that constant filtering consumes processing resources \u2014 hours of work in such headphones are more tiring than they should be.<\/p>\n<p style=\"font-size: 1.1em; margin-top: 20px; padding: 15px; background: rgba(212, 175, 55, 0.1); border-left: 4px solid #d4af37; border-radius: 4px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">This article is about the physics of the tube. Where it comes from, why it doesn&#8217;t show up on the usual graphs, why you still hear it anyway, and what we did in <strong style=\"color: #d4af37;\" data-darkreader-inline-color=\"\">M1<\/strong> to zero it out.<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x26a1; Where it comes from<\/h2>\n<p>The planar magnetic membrane is in constant motion. 
Excursion is on the order of <strong>1\u20132 millimeters<\/strong> at the lowest frequencies; at high frequencies the amplitude is already microscopic. Oscillations run across the entire audio range \u2014 from tens of cycles per second in the bass to tens of thousands at the upper highs. On every cycle the membrane radiates sound <strong>in both directions at once<\/strong> \u2014 half the energy goes toward the ear, the other half goes back into the cup.<\/p>\n<p>And that&#8217;s where it gets interesting.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(255, 107, 157, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #ff6b9d; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f3be; Analogy \u2014 ball and wall<\/h4>\n<p>Throw a ball into a pillow \u2014 it stays in the pillow. Throw it into a wall \u2014 it bounces back. A sound wave is the same ball. If there&#8217;s an absorbing material behind the membrane, the wave dies out. If there&#8217;s a hard wall, it bounces back.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<br \/>\nBehind the membrane in a headphone cup there&#8217;s a back wall. And side walls. And a complex geometry of mounting hardware. Every one of these surfaces is a potential &#8220;wall&#8221; the sound bounces off of. The sound wave from the back side of the membrane hits the cup walls and returns through multiple paths. 
What ends up at your ear is not one signal but <strong>the original signal plus its echo<\/strong>, arriving fractions of a millisecond later.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(33, 150, 243, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #2196f3; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f4a7; Analogy \u2014 ripples in water<\/h4>\n<p>Throw two stones into water near each other, one slightly after the other. Two sets of ripples meet on the surface. Where one wave&#8217;s crest meets another&#8217;s crest \u2014 the height doubles. Where a crest meets a trough \u2014 they cancel each other out, and at that point the water doesn&#8217;t move at all. This phenomenon is called <strong>interference<\/strong>.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<br \/>\nWith sound \u2014 the same thing. The original wave and its reflection inside the cup meet. At some frequencies they add up, at others they cancel. What comes out of the cup into the ear is no longer the original signal but a distorted copy with peaks and dips at frequencies that weren&#8217;t in the recording. 
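The arithmetic behind those peaks and dips is small enough to sketch directly. A minimal illustration (not our measurement code; numpy is assumed, and the 2 cm detour and 0.5 reflection gain are made-up round numbers): add a delayed, attenuated copy of a signal to itself and look at the resulting magnitude.

```python
import numpy as np

# "Signal + echo" toy model: the ear receives the direct wave plus a copy
# reflected off the cup wall, arriving tau seconds later and attenuated.
# The 2 cm detour and 0.5 reflection gain are illustrative round numbers.
c = 343.0              # speed of sound in air, m/s
extra_path = 0.02      # extra distance travelled by the reflection, m
tau = extra_path / c   # echo delay, about 58 microseconds
a = 0.5                # the wall absorbs part of the energy

f = np.linspace(20, 20000, 2000)             # audio band, Hz
H = 1 + a * np.exp(-2j * np.pi * f * tau)    # direct wave + delayed copy
mag_db = 20 * np.log10(np.abs(H))

# Cancellation notches sit at odd multiples of 1/(2*tau).
first_notch = 1 / (2 * tau)
print(f"echo delay {tau * 1e6:.0f} us -> first notch near {first_notch / 1e3:.1f} kHz")
print(f"ripple: +{mag_db.max():.1f} dB peaks, {mag_db.min():.1f} dB dips")
```

With these toy numbers the ripple spans about 3.5 dB up and 6 dB down, and the first notch for a 2 cm detour lands near 8.6 kHz, squarely in the audible band.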
Plotted as a graph, this looks like a jagged comb pattern \u2014 the <strong>comb filter<\/strong>.<\/p>\n<p><a href=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2026\/05\/comb_filter.gif\" data-elementor-open-lightbox=\"no\"><img decoding=\"async\" src=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2026\/05\/comb_filter.gif\" alt=\"Comb filter \u2014 characteristic notched FR pattern\" style=\"width: 100%; height: auto; display: block; margin: 20px 0 0 0; background: #fff;\" \/><\/a><\/p>\n<p style=\"text-align: center; font-size: 0.9em; color: #aaa; margin-top: 10px; font-style: italic;\" data-darkreader-inline-color=\"\">Comb filter<\/p>\n<p>But the distorted frequency picture is only half the problem.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(255, 152, 0, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #ff9800; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f941; Analogy \u2014 a drummer behind the beat<\/h4>\n<p>In a band, the drummer sets the timing. If he hits an eighth note late on every beat \u2014 the whole groove collapses. The notes are the same, the sound is the same, but something doesn&#8217;t line up. <strong>Group delay<\/strong> works the same way \u2014 it&#8217;s the time offset with which different frequencies reach the ear. Ideally, all frequencies should arrive simultaneously. In reality \u2014 they don&#8217;t. And when one frequency lags another by a millisecond, the brain hears it as &#8220;smearing&#8221;, &#8220;lack of focus&#8221;, &#8220;muddiness&#8221;. 
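That offset can be read off the same "signal + delayed echo" model used for the comb filter: group delay is the negative derivative of phase with respect to angular frequency. A rough numerical sketch (numpy assumed; the delay and reflection gain are illustrative numbers, not M1 data):

```python
import numpy as np

# Same "signal + delayed echo" model as the comb filter, but now we look
# at timing instead of loudness. Delay and gain are illustrative numbers.
tau, a = 58e-6, 0.5                          # echo delay (s), reflection gain
f = np.linspace(20, 20000, 20000)            # audio band, Hz
H = 1 + a * np.exp(-2j * np.pi * f * tau)    # transfer function of the echo
phase = np.unwrap(np.angle(H))               # phase without 2*pi jumps, rad

# Group delay = -d(phase)/d(omega); convert to microseconds.
gd_us = -np.gradient(phase, 2 * np.pi * f) * 1e6
print(f"arrival time varies by {gd_us.max() - gd_us.min():.0f} us across the band")
```

Even this single toy reflection makes arrival time swing by tens of microseconds from one frequency to the next, worst right around the comb-filter notches.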
The attack of a hit stops being a point in time \u2014 it spreads out.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(76, 175, 80, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #4caf50; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f41a; Analogy \u2014 a seashell<\/h4>\n<p>Put a shell to your ear and you hear &#8220;the sound of the ocean&#8221;, which isn&#8217;t actually any ocean \u2014 it&#8217;s <strong>resonance of the air inside the shell&#8217;s cavity<\/strong>. The bigger the shell, the lower the hum. The smaller the shell, the higher. Same principle as resonance inside a headphone cup: a sound wave enters a cavity and starts oscillating at a frequency determined by the geometry. The simplest formula for a cavity open at one end (and a headphone cup is exactly that \u2014 a cavity open toward the ear):<\/p>\n<div style=\"text-align: center; font-size: 1.8em; font-family: 'Courier New', monospace; padding: 25px; margin: 20px 0; background: rgba(212, 175, 55, 0.15); border: 2px solid #d4af37; border-radius: 8px; color: #d4af37;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\" data-darkreader-inline-color=\"\"><strong>f = c \/ (4 \u00b7 L)<\/strong><\/div>\n<p>Where <em>c<\/em> is the speed of sound in air (343 m\/s), <em>L<\/em> is the depth of the cavity in meters.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h3>What the formula gives in practice<\/h3>\n<div class=\"table-wrapper\">\n<table style=\"width: 100%; 
border-collapse: collapse; margin-bottom: 20px;\">\n<thead>\n<tr style=\"background: rgba(212, 175, 55, 0.2);\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\">\n<th style=\"padding: 12px; border: 1px solid #333; text-align: left;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Cavity<\/th>\n<th style=\"padding: 12px; border: 1px solid #333; text-align: left;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Depth<\/th>\n<th style=\"padding: 12px; border: 1px solid #333; text-align: left;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Resonance<\/th>\n<th style=\"padding: 12px; border: 1px solid #333; text-align: left;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Frequency zone<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Seashell<\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">5 cm<\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\"><strong>~1.7 kHz<\/strong><\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" 
data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Lower mids<\/td>\n<\/tr>\n<tr style=\"background: rgba(255, 107, 157, 0.08);\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\">\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Headphone cup<\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">2 cm<\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\"><strong>~4.3 kHz<\/strong><\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Upper mids \/ vocal formant zone<\/td>\n<\/tr>\n<tr style=\"background: rgba(76, 175, 80, 0.08);\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\">\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Headphone cup<\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">1 cm<\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" 
data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\"><strong>~8.5 kHz<\/strong><\/td>\n<td style=\"padding: 12px; border: 1px solid #333;\" data-darkreader-inline-border-top=\"\" data-darkreader-inline-border-right=\"\" data-darkreader-inline-border-bottom=\"\" data-darkreader-inline-border-left=\"\">Lower highs<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p>This is a heavily simplified model \u2014 the formula describes an ideal rigid resonator with no losses. In a real cup it&#8217;s more complex: the membrane itself vibrates and absorbs part of the energy, the earpad acts as a soft boundary, wall materials partially damp reflections, the shape is far from regular geometry. But the idea is correct: <strong style=\"color: #ff6b9d;\" data-darkreader-inline-color=\"\">every cup has its own resonant frequency at which sound rings louder than it should<\/strong>. And if that frequency isn&#8217;t damped by design, it will color every sound in the recording with its own hum.<\/p>\n<p style=\"font-size: 1.1em; margin-top: 20px; padding: 15px; background: rgba(255, 107, 157, 0.1); border-left: 4px solid #ff6b9d; border-radius: 4px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\"><strong>Interference with its comb-filter signature, group delay, and resonance<\/strong>: all of these together are the tube. Not a single thing, but a whole complex of phenomena. 
And all of them happen <strong>in the time domain<\/strong>, not at isolated frequencies.<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x1f4ca; Why you don&#8217;t see it on the frequency response<\/h2>\n<p style=\"font-size: 1.2em; padding: 20px; background: rgba(212, 175, 55, 0.1); border-radius: 8px; text-align: center;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\"><strong style=\"color: #d4af37;\" data-darkreader-inline-color=\"\">Frequency response is a still photo of WHICH frequencies the headphones reproduce. The impulse is HOW those frequencies arrive and decay over time.<\/strong><\/p>\n<p>You can&#8217;t tell from a single frame of a football game whether anyone scored or whether the kick missed. You can&#8217;t tell from the frequency response what the membrane is doing <strong>over time<\/strong>.<\/p>\n<p>To see the tube, you need graphs that show <strong>time-domain behavior<\/strong>.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(33, 150, 243, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #2196f3; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f6c0; Analogy \u2014 clapping in a bathroom<\/h4>\n<p>Clap your hands in a carpeted room \u2014 you get a short &#8220;clap&#8221; and silence. Clap in a tiled bathroom \u2014 &#8220;clap&#8221; and reverberation, echo, a hum that takes about a second to die. The loudness of the clap is the same. 
But the <strong>tail<\/strong> is completely different.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<br \/>\nThe <strong>impulse response<\/strong> (IR) is exactly that kind of graph. Feed an ideal short impulse into the headphones \u2014 and look at how the membrane reproduces it and how quickly it settles afterward. In good headphones, the membrane returns to rest almost immediately after the main peak. In bad ones, it keeps twitching for several more milliseconds \u2014 and those twitches <em>are<\/em> the tube, in plain sight.<\/p>\n<p><a href=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2024\/06\/impuls.jpg\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2024\/06\/impuls.jpg\" alt=\"M1 impulse response, left channel, measured in REW\" style=\"width: 100%; height: auto; display: block; margin: 20px 0 0 0; background: #fff;\" \/><\/a><\/p>\n<p style=\"text-align: center; font-size: 0.9em; color: #aaa; margin-top: 10px; font-style: italic;\" data-darkreader-inline-color=\"\">M1 impulse response, left channel, measured in REW.<\/p>\n<p style=\"text-align: left; font-size: 0.85em; color: #888; margin-top: 8px; font-style: italic;\" data-darkreader-inline-color=\"\">Impulse response is a characteristic of the entire signal chain (DAC \u2192 amplifier \u2192 headphones \u2192 microphone), not of the membrane alone. For an IR to show the properties of the headphones specifically, the measurement chain must use a quality amplifier with low output impedance and high damping factor. Otherwise the graph primarily characterizes the amplifier: a low-impedance headphone membrane is poorly damped under high output impedance and keeps oscillating by inertia. 
The measurements presented above were captured in conditions where the contribution of the other chain elements to the IR is negligible. For context: if you measured headphones from well-known brands in the same quality category \u2014 but at 10\u00d7 the price \u2014 through the same signal chain, their impulse response would often be slower.<\/p>\n<p>This is a real M1 impulse response, left channel, measured in REW. The main peak is sharp \u2014 that&#8217;s the attack. Right after it, a negative bounce down to <strong>\u221260%<\/strong> \u2014 natural return motion of the membrane. After 100 \u00b5s, a residual positive bounce of about +20%. Then a series of oscillations with amplitude under 10%. By <strong style=\"color: #4caf50;\" data-darkreader-inline-color=\"\">one millisecond<\/strong> the level is already below 5% of the main peak (around \u221226 dB). After that \u2014 almost a flat line.<\/p>\n<p>In absolute numbers, this is <strong>one of the fastest impulse decays<\/strong> currently available in monitoring headphones. Most well-known models in this category decay substantially slower.<\/p>\n<p style=\"margin-top: 20px; padding: 15px; background: rgba(76, 175, 80, 0.1); border-left: 4px solid #4caf50; border-radius: 4px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">If you measure the period of those residual oscillations \u2014 it&#8217;s about <strong>100\u2013150 \u00b5s<\/strong>. That corresponds to a frequency of <strong>7\u201310 kHz<\/strong>. And that&#8217;s exactly the frequency <strong style=\"color: #4caf50;\" data-darkreader-inline-color=\"\">predicted by the formula for our cup geometry<\/strong>. The physics didn&#8217;t go anywhere \u2014 the 7 kHz resonance physically arises. But thanks to damping, its energy in the impulse response lives for less than a millisecond. 
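Both figures here are one-line arithmetic. A quick sketch that reproduces the quarter-wave table values and the period-to-frequency conversion (plain Python, no external libraries):

```python
# Quarter-wave resonance of a cavity open at one end: f = c / (4 * L).
c = 343.0  # speed of sound in air, m/s

for name, depth_m in [("seashell, 5 cm", 0.05),
                      ("cup, 2 cm", 0.02),
                      ("cup, 1 cm", 0.01)]:
    print(f"{name}: {c / (4 * depth_m):.0f} Hz")
# -> 1715 Hz, 4288 Hz, 8575 Hz: the values from the table.

# Residual ringing period -> frequency, f = 1 / T:
for period_us in (100, 150):
    print(f"period {period_us} us -> {1 / (period_us * 1e-6) / 1e3:.1f} kHz")
# -> 10.0 kHz and 6.7 kHz: the 7-10 kHz band quoted above.
```

Inverting the formula, the measured 7-10 kHz band corresponds to a quarter-wave depth of roughly 0.9-1.2 cm, consistent with the cup dimensions in the table.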
That&#8217;s what &#8220;zeroed-out tube&#8221; means \u2014 not the absence of reflections, but the absence of their accumulation in time.<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(156, 39, 176, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #9c27b0; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f3b9; Analogy \u2014 an out-of-tune piano<\/h4>\n<p>There&#8217;s another graph called <strong>waterfall<\/strong> (or CSD, or burst decay). It&#8217;s like a piano on which you press every key in succession and watch how each note decays. On an ideal piano, every note decays equally smoothly with no foreign overtones. On an out-of-tune piano, one note rings longer than the others, another adds parasitic ringing, another dies too quickly. Same with headphones: waterfall shows how energy at each frequency dies over time. 
Long tails at some frequencies and fast decay at others \u2014 that&#8217;s the &#8220;out-of-tune piano&#8221;, that&#8217;s the tube.<\/p>\n<\/div>\n<p><a href=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2026\/05\/m1_waterfall.jpg\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2026\/05\/m1_waterfall.jpg\" alt=\"M1 spectral decay (waterfall)\" style=\"width: 100%; height: auto; display: block; margin: 20px 0 0 0; background: #000;\" \/><\/a><\/p>\n<p style=\"text-align: center; font-size: 0.9em; color: #aaa; margin-top: 10px; font-style: italic;\" data-darkreader-inline-color=\"\">M1 waterfall \u2014 measurements by <a href=\"https:\/\/boizoff.com\/neuropunk-m1-review\/\" target=\"_blank\" rel=\"noopener nofollow\" style=\"color: #d4af37;\" data-darkreader-inline-color=\"\">Boitsov<\/a><\/p>\n<p>On the M1 waterfall you can see that the main energy (the red-orange zone) decays virtually evenly across the entire range within 5\u20136 cycles. There are no &#8220;hung notes&#8221; in problem zones. A slight oscillation around 4\u20135 kHz is the first cup mode predicted by the resonance table above, pushed 20\u201330 dB below the main energy.<\/p>\n<p style=\"font-size: 1.1em; margin-top: 20px; padding: 15px; background: rgba(212, 175, 55, 0.1); border-left: 4px solid #d4af37; border-radius: 4px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">Looking at the frequency response alone, we wouldn&#8217;t have seen this at all. On an FR graph, M1 looks flat. 
<strong>The time-domain graphs show what kind of work made that &#8220;flat&#8221; possible.<\/strong><\/p>\n<p><a href=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2024\/12\/graph.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/gear.neuropunk.ru\/wp-content\/uploads\/2024\/12\/graph.png\" alt=\"M1 frequency response\" style=\"width: 100%; height: auto; display: block; margin: 20px 0 0 0; background: #fff;\" \/><\/a><\/p>\n<p style=\"text-align: center; font-size: 0.9em; color: #aaa; margin-top: 10px; font-style: italic;\" data-darkreader-inline-color=\"\">M1 frequency response<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x1f9e0; Psychoacoustics: how the brain hears time<\/h2>\n<p>You might ask: if the tube sits in the range of tenths of a dB or a few milliseconds, how is it possible to hear it at all?<\/p>\n<p><strong style=\"color: #4caf50;\" data-darkreader-inline-color=\"\">It is possible.<\/strong> The mechanism is just not what you&#8217;re used to thinking it is.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(33, 150, 243, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #2196f3; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f441;&#xfe0f; Analogy \u2014 binocular vision<\/h4>\n<p>We have two eyes. Each sees its own slightly different picture. The brain compares them and from the <strong>difference<\/strong> extracts information about distance, volume, and space. With one eye, we&#8217;d see the world flat.<\/p>\n<p>Hearing works the same way. We have two ears. 
A sound source that isn&#8217;t directly in front of us reaches one ear slightly before the other. That difference is sometimes a fraction of a millisecond. The brain compares the arrival of sound at the left and right ear and from that <strong>difference<\/strong> builds the auditory space: where the source is, how far away it is, whether it&#8217;s moving.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<p style=\"font-size: 1.15em; padding: 20px; background: rgba(244, 67, 54, 0.1); border-left: 4px solid #f44336; border-radius: 4px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">When headphones add the tube \u2014 they <strong style=\"color: #f44336;\" data-darkreader-inline-color=\"\">break those time relationships<\/strong>. Not loudness, not frequency \u2014 time itself. And the brain feels it: &#8220;something&#8217;s wrong with the space&#8221;. The stage &#8220;compresses&#8221;, instruments &#8220;merge&#8221;, attacks &#8220;smear&#8221;. You can&#8217;t explain why \u2014 because you can only explain it if you know what to look for.<\/p>\n<p>And one more thing. The brain constantly compares what it hears with what it <strong>expects<\/strong> to hear. It has a huge accumulated database of how real impacts, voices, instruments, and rooms sound. When something in a recording doesn&#8217;t match expectations \u2014 the brain spends resources to figure it out. Not consciously. At the level of background processing. And that&#8217;s exactly why bad headphones make you <strong>tired<\/strong> \u2014 the brain is working at its limit, trying to make sense of a stream of data in which something doesn&#8217;t add up. 
With good headphones you don&#8217;t get tired \u2014 because <strong>there&#8217;s nothing to figure out<\/strong>, everything is in its place.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x1f3b6; Hearing is training<\/h2>\n<p style=\"font-style: italic; font-size: 1.1em; color: #aaa;\" data-darkreader-inline-color=\"\">&#8220;I&#8217;m not a sound engineer, I won&#8217;t hear the difference.&#8221;<\/p>\n<p>This is the most common thing we hear from people who learn about the tube for the first time. And it&#8217;s <strong style=\"color: #f44336;\" data-darkreader-inline-color=\"\">not true<\/strong>.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(76, 175, 80, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #4caf50; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f527; Analogy \u2014 a piano tuner<\/h4>\n<p>A tuner can tell what&#8217;s wrong with an instrument within seconds. Not because he has &#8220;golden ears&#8221;. But because over a lifetime he has tuned a thousand pianos and knows exactly how every possible defect sounds. Physiologically, his hearing is <strong>the same<\/strong> as anyone else&#8217;s. The difference is in <strong>training<\/strong>.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<br \/>\nThe same applies to hearing the tube. The first time you put on headphones without it, the most common first impression is: <em>&#8220;something&#8217;s strange, the sound feels kind of dry&#8221;<\/em>. 
That&#8217;s because your brain has gotten used to compensating for the tube as background noise, and now its absence feels like &#8220;something is missing&#8221;.<\/p>\n<p>After a few days of work the habit shifts. After a few weeks \u2014 you go back to your old headphones and hear the tube as clearly as a piano tuner hears a detuning. Just because you&#8217;ve trained yourself to listen in that direction.<\/p>\n<p>And that&#8217;s exactly why veteran producers are so sensitive about this topic. They don&#8217;t hear any special frequencies \u2014 they have the same physiology as anyone else. They&#8217;ve just spent thousands of hours working with sound and learned to tell when something sounds &#8220;off&#8221;. <strong>This skill has nothing to do with innate &#8220;golden ears&#8221; \u2014 it&#8217;s pure practice.<\/strong><br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x1f6e0;&#xfe0f; What we did in M1<\/h2>\n<p>When the team sat down to design the M1, the main <strong>problem<\/strong> we needed to solve wasn&#8217;t framed as &#8220;make yet another pair of headphones with a good frequency response&#8221;. The market is full of those. The main problem was \u2014 <strong style=\"color: #ff6b9d;\" data-darkreader-inline-color=\"\">to zero out the tube<\/strong>. Without that, you can&#8217;t build the kind of &#8220;see-equals-hear microscope&#8221; a producer needs to control complex fast-tempo production \u2014 high BPM, dense synthesis, interlocking drums, where every millisecond in the mix matters.<\/p>\n<p>That immediately determined the design choices. Each of them is a compromise with physics, and each was selected <strong>empirically<\/strong>, through measurements and listening. 
You can&#8217;t get reference-grade driver sound from pure theory \u2014 full theoretical modeling of a headphone is not analytically tractable. The Finite Element Method (FEM \u2014 where a computer breaks a complex shape into millions of small elements and calculates the behavior of each) is a powerful tool, but it solves the problem with major simplifications. A real cup with a real membrane on a real ear, especially at high frequencies where diffraction, partial membrane modes, and the membrane&#8217;s complex behavior as a radiator come into play, produces deviations from the model that nothing but listening can catch.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(255, 152, 0, 0.08); padding: 20px; border-radius: 8px; border-left: 4px solid #ff9800; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0;\">&#x1f35e; Analogy \u2014 baking bread<\/h4>\n<p>A bit more yeast \u2014 different crumb structure. Changed the oven temperature \u2014 different crust. Switched flour \u2014 rewrite the recipe. Many parameters, all interdependent, and the only way to find out whether it worked is when the bread is baked and you cut it open.<\/p>\n<p>Same with a driver: change the membrane tension and you have to readjust the damping. Tweak the damping and the bass sensitivity changes. Change the suspension geometry and you redo the magnetic system. And the only way to verify whether it worked is one and the same: <strong>you have to taste it<\/strong>. With bread \u2014 literally. 
With a driver \u2014 listen to it.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<br \/>\nOver a year of M1 development we produced an enormous number of drawings, <strong>ten construction prototypes<\/strong> of the headphone itself, and <strong style=\"color: #d4af37;\" data-darkreader-inline-color=\"\">64 membrane iterations<\/strong>. Every membrane was not only measured \u2014 it went through listening tests. The test material was dozens of reference tracks, half of which are the personal productions of one of the creators. These are tracks where every millisecond is known down to the instrument: which transient where, which phase pattern where, what exactly should be happening in every section of the wave. From what specifically dropped out or got distorted in each new driver iteration, we could tell which way to turn the next adjustment.<\/p>\n<h3 style=\"margin-top: 30px;\">The specific solutions that made it into the final version<\/h3>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(76, 175, 80, 0.08); padding: 25px; border-radius: 8px; border-left: 4px solid #4caf50; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0; color: #4caf50;\" data-darkreader-inline-color=\"\">1. An oval cup shape with no sharp angles<\/h4>\n<p>In a rectangular or cylindrical cavity, sound bounces between parallel walls and accumulates as standing waves \u2014 the most efficient mechanism for forming the tube. 
In a cup with smoothed geometry, sound reflections scatter in different directions, never returning twice to the same point.<\/p>\n<p><em>Everyday analogy:<\/em> echo in a rectangular room with hard parallel walls lives long and hums \u2014 anyone who&#8217;s been in an empty room before renovation has heard it. In a room with skewed non-parallel walls (the way recording studios are professionally designed) the echo dies almost instantly.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(33, 150, 243, 0.08); padding: 25px; border-radius: 8px; border-left: 4px solid #2196f3; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0; color: #2196f3;\" data-darkreader-inline-color=\"\">2. A precisely tuned venting path<\/h4>\n<p>This is a collective name for the entire system of excess pressure release and resonance reduction: where the energy generated behind the membrane goes, how it passes through the construction elements, and in what volume it exits.<\/p>\n<p>In the full-size planar market, cup depth lies in the range of <strong>18\u201326 mm<\/strong> based on available measurements \u2014 which gives a first cup mode in the area of <strong>3\u20134 kHz<\/strong>.<\/p>\n<p>In the M1, total depth is <strong style=\"color: #4caf50;\" data-darkreader-inline-color=\"\">12 mm<\/strong> (3 mm earpad + 9 mm from membrane to back wall). This shifts the resonance from the critical 3\u20134 kHz to <strong>~7 kHz<\/strong>. At 7 kHz the wavelength is shorter \u2014 substantially easier to damp with thin acoustic materials. 
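The depth figures above are consistent with a simple quarter-wave (closed-cavity) estimate, f &#8776; c / 4L. This is a rough sanity check using an idealized closed-pipe model, not the M1 team&#8217;s own stated calculation; a real cup with damping and an earpad is far from an ideal pipe:

```python
# Quarter-wave estimate of the first cavity mode of a headphone cup:
# f = c / (4 * L). A simplification for sanity-checking the quoted depths,
# not a substitute for measurement.
C = 343.0  # speed of sound in air at room temperature, m/s

def first_mode_hz(depth_mm):
    # depth_mm: distance from membrane to back wall (plus earpad), in mm
    return C / (4.0 * depth_mm / 1000.0)

for depth in (26, 18, 12):
    print(depth, 'mm ->', round(first_mode_hz(depth)), 'Hz')
# prints: 26 mm -> 3298 Hz, 18 mm -> 4764 Hz, 12 mm -> 7146 Hz
# The 18-26 mm market range lands near 3.3-4.8 kHz (roughly the quoted
# 3-4 kHz area), while 12 mm lands near the ~7 kHz quoted for the M1.
```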
Sound at this frequency, even if it managed to accumulate at all, would pass through the damping layers and lose energy before becoming an audible resonance.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<div style=\"background: rgba(255, 107, 157, 0.08); padding: 25px; border-radius: 8px; border-left: 4px solid #ff6b9d; margin: 20px 0;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0; color: #ff6b9d;\" data-darkreader-inline-color=\"\">3. Even force distribution across the membrane and control of partial modes<\/h4>\n<p>A membrane of <strong>46\u00d760 mm<\/strong> isn&#8217;t a point source \u2014 it&#8217;s an <strong>area<\/strong>. If the magnetic field is distributed unevenly across it, different sections of the membrane move with different amplitudes. At certain frequencies the membrane stops moving as a single piston and breaks up into independently oscillating zones \u2014 these are <strong>partial modes<\/strong>. Each such mode adds its own ringing.<\/p>\n<p>In the M1, the magnetic system is designed so that force is distributed as evenly as possible \u2014 this removes partial modes as a source of coloration.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<p style=\"font-size: 1.1em; margin-top: 20px; padding: 15px; background: rgba(212, 175, 55, 0.1); border-left: 4px solid #d4af37; border-radius: 4px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">Together these solutions produce what you see in the <strong>impulse response<\/strong>: a clean main peak, fast decay, no long tails. 
And what you see in the <strong>waterfall<\/strong>: even decay across the entire range with no long-ringing &#8220;notes&#8221;.<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x1f3af; In place of a conclusion: see = hear<\/h2>\n<p>Modern production is work with the <strong>visible<\/strong> waveform. Open a DAW, zoom in, and you can drill all the way down to an individual sample. At a 48 kHz sample rate that&#8217;s a resolution of <strong style=\"color: #d4af37;\" data-darkreader-inline-color=\"\">0.02 milliseconds per sample<\/strong>. Which means a producer today literally sees on screen events lasting <strong>tenths of a millisecond<\/strong> \u2014 transients, phase conflicts, zigzags. Sees a phase mismatch between the kick and the bass in the <strong>60\u2013100 Hz<\/strong> region \u2014 those couple-of-sample offsets that make the sub lose impact. Sees a conflict between the snare and the mid-bass at <strong>200\u2013400 Hz<\/strong> \u2014 in the zone where these instruments overlap and mask each other. Sees where there&#8217;s a clean sustain and where a micro-click appeared.<\/p>\n<p>This is historically a very recent capability. Before convenient DAWs in the early 2000s, this resolution was either unavailable entirely, or available only on extremely expensive studio gear. The producer of the 80s\u201390s could hear the problem, but couldn&#8217;t precisely see it. Real-time analyzers that show honest millisecond resolution of what&#8217;s happening to the sound right now only became widespread <strong>around the mid-2000s<\/strong>.<\/p>\n<p>And here&#8217;s the paradox that explains why this problem still hasn&#8217;t been solved at scale. 
Over those decades, sound engineers learned to see what&#8217;s happening to the sound at millisecond resolution. But the headphone manufacturers \u2014 especially the big legendary brands \u2014 have spent those same years barely involving such engineers in full-cycle development. Large corporations are inertial, and their R&amp;D is tuned for the mass market and impressive, audiophile-pleasing sound, not for accurate monitoring. Development relies on engineering intuition, on the personal preferences of the creators, on testing with random focus groups aimed at selling more \u2014 not at making it more accurate. Individual models do land on good characteristics, but those hits are accidental: there is no systematic approach behind them.<\/p>\n<p>This gap hits both sides \u2014 producers and listeners alike. The producer spends years moving from one &#8220;legendary&#8221; model to another, in each one re-training their internal ADC and raising their perceptual thresholds \u2014 but never reaches that &#8220;see = hear&#8221; state, because no one set up the task systematically. The audiophile, lacking the skill to tell truth from coloration, spends endlessly too: they buy &#8220;legendary&#8221; headphones, listen for a year, retrain their ADC inside that specific tube, start hearing its limitations, feel that something is missing \u2014 and go out for the next pair, in which a different kind of tube sits. And so it goes in a loop, year after year. 
The industry is perfectly tuned for this cycle \u2014 every new model is sold as a &#8220;new level&#8221;, but in reality it&#8217;s just a different coloration at a different price.<\/p>\n<p>Now the situation for the modern sound engineer looks like this:<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column width=&#8221;1\/2&#8243;][vc_column_text]<\/p>\n<div style=\"background: rgba(76, 175, 80, 0.1); padding: 20px; border-radius: 8px; border-left: 4px solid #4caf50; height: 100%;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0; color: #4caf50;\" data-darkreader-inline-color=\"\">&#x1f441;&#xfe0f; Seeing at millisecond resolution<\/h4>\n<p>Any modern DAW gives you this capability out of the box.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][vc_column width=&#8221;1\/2&#8243;][vc_column_text]<\/p>\n<div style=\"background: rgba(244, 67, 54, 0.1); padding: 20px; border-radius: 8px; border-left: 4px solid #f44336; height: 100%;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0; color: #f44336;\" data-darkreader-inline-color=\"\">&#x1f442; Hearing at the same resolution<\/h4>\n<p>99% of headphones on the market don&#8217;t let you do this.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<br \/>\nIt comes down to the headphones. They show &#8220;the big picture&#8221; \u2014 does the stage feel right, is the instrument recognizable. But 2\u20134-millisecond errors in a dense mix get drowned in them. Drowned in the tube. 
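To give a feel for the scale of such errors, here is a hypothetical worked example (the numbers are illustrative assumptions, not measurements from the article): a 2 ms timing offset on an 80 Hz component, squarely inside the 60\u2013100 Hz kick-and-bass region, costs noticeable level when the two parts sum:

```python
import math

# Hypothetical illustration: the effect of a 2 ms timing error between
# two equal-level 80 Hz components (e.g. kick and bass) when they sum.
f = 80.0        # Hz, inside the 60-100 Hz region discussed above
delay = 0.002   # seconds (2 ms)

phase_deg = 360.0 * f * delay  # phase offset introduced by the delay
# Two equal unit-amplitude sines offset by theta sum to 2*cos(theta/2).
gain = 2.0 * math.cos(math.radians(phase_deg) / 2.0)
loss_db = 20.0 * math.log10(gain / 2.0)

print(round(phase_deg, 1), 'deg offset,', round(loss_db, 2), 'dB vs aligned')
# prints: 57.6 deg offset, -1.15 dB vs aligned
```

A 2 ms error is thus not cancellation, but a dB-scale dent in the sub region; at 4 ms the offset doubles and the dent grows accordingly.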
And the producer ends up in a strange position: <strong style=\"color: #ff6b9d;\" data-darkreader-inline-color=\"\">eyes see the problem, but the ears can&#8217;t catch it<\/strong>, because the headphones smear that difference into their own coloration.<\/p>\n<p style=\"font-size: 1.15em; margin-top: 20px; padding: 20px; background: rgba(76, 175, 80, 0.1); border-left: 4px solid #4caf50; border-radius: 4px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">When the tube is zeroed out \u2014 <strong style=\"color: #4caf50;\" data-darkreader-inline-color=\"\">visual and auditory resolution finally synchronize<\/strong>. You see a glitch \u2014 you hear it in the headphones. You see a 2-ms phase conflict in the bass \u2014 you hear that exact thing, not &#8220;something wrong with the bass, need to figure it out&#8221;. The information the DAW gives your eyes starts to match the information the headphones give your ears.<\/p>\n<p>That&#8217;s where the name <strong>&#8220;audio microscope&#8221;<\/strong> came from. A microscope gives the eyes a resolution they don&#8217;t naturally have. We have headphones that give the ears a resolution equivalent to what the DAW has long given the eyes. <strong>Hearing finally catches up with sight.<\/strong><\/p>\n<p>In that sense, the tube is not just an acoustic defect. 
It&#8217;s the <strong>gap<\/strong> between what the producer sees on the screen and what they&#8217;re capable of hearing in reality.<\/p>\n<p style=\"font-size: 1.2em; text-align: center; margin-top: 30px; padding: 25px; background: linear-gradient(135deg, rgba(255, 107, 157, 0.15), rgba(212, 175, 55, 0.15)); border-radius: 8px;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\"><strong>That gap is what we closed in M1.<\/strong><\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<\/p>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text css=&#8221;&#8221; woodmart_inline=&#8221;no&#8221; text_larger=&#8221;no&#8221;]<\/p>\n<h2>&#x2728; A final word<\/h2>\n<p>M1 is not a classic commercial product. The idea grew out of the <strong>practical needs<\/strong> of a producer who has worked in neurofunk for thirty years: high BPM, dense synthesis, surgically precise sound design, interlocking rhythmic parts. With material like that, headphones with the tube turn every mix into a fight against background noise that isn&#8217;t actually in the signal.<\/p>\n<p>Over those thirty years many &#8220;legendary&#8221; models from major audiophile brands passed through our hands \u2014 each promised studio monitoring and each in practice colored the sound in its own way, making it more impressive, more pleasant, &#8220;warmer&#8221;, but not the <strong>truth<\/strong>. The closest thing to honest reproduction turned out to be the <strong>Fostex RP MK3<\/strong>. Even those had their downsides because of the plastic cup and Kapton membrane: a light touch of the tube, a smeared midrange around 300 Hz, insufficient sub depth, hyped highs that can trigger tinnitus in some listeners, and a stiff, hard headband. 
M1 was built with the goal of <strong>preserving honest monitoring while removing those weaknesses<\/strong>.<\/p>\n<p>Expensive planars from respected manufacturers are great headphones, and for classical, jazz, and audiophile vinyl they work beautifully. But for modern EDM production they <strong>color<\/strong> the sound, making it sweeter to the ear. That&#8217;s normal and acceptable for the listener. For the producer it means the final mix is balanced against one acoustic picture, while the listener on their headphones or speakers will hear a completely different one.<\/p>\n<p>When it became clear that the instrument worked, we decided to set up production for colleagues and students at an accessible price. The goal: a producer without a million-dollar studio should be able to control sound at the same level as a producer with top-tier gear. <strong>To compete not with finances, but with hands and ideas.<\/strong><\/p>\n<p>M1 is a pair of <em>idea-driven<\/em> headphones. From user to user. Built from personal experience to solve a personal pain, not to squeeze money out of the buyer.<br \/>\n[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column width=&#8221;1\/2&#8243;][vc_column_text]<\/p>\n<div style=\"background: rgba(255, 107, 157, 0.1); padding: 25px; border-radius: 8px; border-left: 4px solid #ff6b9d; height: 100%;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0; color: #ff6b9d;\" data-darkreader-inline-color=\"\">&#x1f3a7; For producers<\/h4>\n<p>With M1, your music will get better \u2014 you&#8217;ll hear what&#8217;s <strong>actually<\/strong> happening in it. 
Every phase mistake, every uncleaned artifact, every click and rustle \u2014 visible on the analyzer and audible at the same time.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][vc_column width=&#8221;1\/2&#8243;][vc_column_text]<\/p>\n<div style=\"background: rgba(212, 175, 55, 0.1); padding: 25px; border-radius: 8px; border-left: 4px solid #d4af37; height: 100%;\" data-darkreader-inline-bgimage=\"\" data-darkreader-inline-bgcolor=\"\" data-darkreader-inline-border-left=\"\">\n<h4 style=\"margin-top: 0; color: #d4af37;\" data-darkreader-inline-color=\"\">&#x1f3b5; For audiophiles<\/h4>\n<p>In M1, music becomes <strong>the truth<\/strong>. Not &#8220;warmer&#8221;, not &#8220;airier&#8221;, not &#8220;more musical&#8221; \u2014 but the way the sound engineer recorded it. <strong>The truth has to be accepted.<\/strong> Once you accept it, you don&#8217;t go back. Because once you hear what a good mix really sounds like, the difference between it and a bad mix becomes obvious. And every other pair of headphones starts to sound like varying degrees of coloration \u2014 some pleasant, some not \u2014 the truth distorted.<\/p>\n<\/div>\n<p>[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>A thriller of haunting cup reflections, phantoms of phase distortion, and why you don&#8217;t hear them \u2014 but they&#8217;re 
there.<\/p>\n","protected":false},"author":1,"featured_media":12822,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[180],"tags":[],"class_list":["post-12789","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-rukovodstvo"],"_links":{"self":[{"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/posts\/12789","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/comments?post=12789"}],"version-history":[{"count":23,"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/posts\/12789\/revisions"}],"predecessor-version":[{"id":12847,"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/posts\/12789\/revisions\/12847"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/media\/12822"}],"wp:attachment":[{"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/media?parent=12789"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/categories?post=12789"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gear.neuropunk.ru\/en\/wp-json\/wp\/v2\/tags?post=12789"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}