<p><em>blog.timmschoof.com · Timm Schoof (hello@timmschoof.com) · feed last updated 2023-06-27</em></p>
<h1 id="fuji-blinkies-and-everything-in-between"><a href="http://blog.timmschoof.com//2019/11/11/fuji-blinkies">Fuji, Blinkies and Everything in between</a></h1>
<p><em>2019-11-11</em></p>
<p>This blog post consists of two parts: In the first, I explain a phenomenon about a feature that is important to understand when shooting raw with a mirrorless (or a DSLR when used in live view) camera. In the second, I point out what I find to be huge problems with Fuji’s implementation of said feature – in several ways.</p>
<h2 id="part-1-why-blinkies-are-wrong">Part 1: Why Blinkies are Wrong</h2>
<p>In my <a href="https://blog.timmschoof.com/2019/09/26/xh1-em10-features/">previous post</a>, I talked about highlight warnings, also known as “blinkies”. What I didn’t talk about there was that blinkies are wrong, most of the time.</p>
<p>Here’s why: When you review pictures in-camera, the camera shows you a jpg generated from the raw file. This jpg has less dynamic range than the underlying raw data – so the blinkies can fire even while the raw file still holds highlight detail. Et voilà!</p>
<p>Therefore, part of getting to know your camera is getting a feeling for how much leeway there generally is, so that when you’re out shooting, you know when you’ll <em>actually</em> have lost detail in your file. Of course, this becomes especially relevant when using the technique of <a href="https://en.wikipedia.org/wiki/Exposing_to_the_right">“ETTR”, exposing to the right</a>.</p>
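<p>To put rough numbers on ETTR: pushing the exposure N stops “to the right” means multiplying the exposure time by 2<sup>N</sup>, then pulling the file back down in post. A quick sketch of that arithmetic – purely illustrative, nothing camera-specific:</p>

```python
# Illustration of the ETTR arithmetic: pushing exposure N stops to the
# right multiplies the captured light by 2**N; you pull it back down
# later in raw processing. Just a numbers sketch, nothing camera-specific.
def ettr_shutter_time(metered_seconds, push_stops):
    """Shutter time after pushing the metered exposure by push_stops."""
    return metered_seconds * (2 ** push_stops)

# Metered 1/250 s with 1 stop of safe highlight headroom -> shoot 1/125 s.
```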
<h2 id="part-2-fuji-and-blinkies">Part 2: Fuji and Blinkies</h2>
<h3 id="the-jpg-problem">The jpg Problem</h3>
<p>Now, what’s special about this with Fuji? As I said, blinkies and histogram are derived from a jpg. Well, Fuji is big in the game of fabulous (from what I hear) jpg files <em>sooc</em> – straight out of camera. And that’s great, except for in one aspect:</p>
<p>A jpg that is intended to be a final photo is a fundamentally different <em>thing</em> than a raw file with the highest possible dynamic range, or even a jpg file approximating the same. A final picture may have… contrast. Or intense colors. Therefore, it’s not at all suitable to be the base for a feature that is intended to show the limits of a file’s dynamic range. The two goals of showing a final picture and a file’s dynamic range are at odds with each other. That’s why Fuji’s focus on jpg works out to be a disadvantage for the raw shooter in this instance.</p>
<p>So, what’s the solution? Well, there is none. But there’s a hack: a jpg look that is as “flat”, as little “opinionated” as possible. Eterna, a Fuji film profile intended for video use, is very flat and can be configured to be even flatter, so it’s very well suited for the inclined raw shooter’s needs in this instance. With this, the images you review in-camera are the best representation of a raw file that Fuji’s current system/philosophy allows for.</p>
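<p>If you want to check after the fact how much detail is <em>actually</em> gone, you have to look at the raw sensor data, not at any jpg-derived preview. Here’s a hedged Python sketch of that idea – the counting logic is plain Python, while reading the file via the third-party <em>rawpy</em> package (and the file name) is an assumption, shown only as a comment:</p>

```python
# Hedged sketch: the fraction of raw sensor values that truly hit
# saturation, independent of any jpg preview or film simulation.
def clipped_fraction(raw_values, white_level):
    """Share of raw sensor values at or above the saturation point."""
    clipped = sum(1 for v in raw_values if v >= white_level)
    return clipped / len(raw_values)

# Possible usage (the "rawpy" package and the file name are placeholders):
# import rawpy
# with rawpy.imread("DSCF0001.RAF") as raw:
#     frac = clipped_fraction(raw.raw_image.flatten(), raw.white_level)
#     print(f"{frac:.2%} of sensor values are actually clipped")
```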
<p class="pic"><img src="http://blog.timmschoof.com/images/eterna.jpg" /><br />Recommended "film simulation" setup for raw shooters</p>
<p>Check out this <a href="https://www.dpreview.com/forums/thread/4325169">dpreview forum thread</a> for some more Fuji-specific talk on this.</p>
<h3 id="the-evf-problem-vs-natural-live-view">The EVF Problem vs. “Natural Live View”</h3>
<p>But there’s one more thing: Before, I was a bit imprecise and only talked about reviewing pictures on the back of the camera, completely omitting that you gotta be looking at something <em>while shooting</em> as well! The EVF/back screen show video feeds. To a close approximation, they’re showing something like jpgs as well, just so many per second that it becomes a video. So the same rules apply.</p>
<p>Talking about Fuji specifically though, there’s good news in this regard: There’s a feature called “<a href="http://fujifilm-dsc.com/en/manual/x-h1/menu_setup/screen_set-up/index.html#natural_live_view">Natural Live View</a>”. It takes the “film look” configuration out of the equation and gives you an even flatter look/profile/whatever-you-wanna-call-it for operating the camera. It still shows the effects of exposure correction (anything else would be really dumb), so it’s a really great feature.</p>
<p>With this enabled and the above-mentioned Eterna configuration, you’re good to go.</p>
<h3 id="the-contradiction-between-evf-and-playback-jpg">The Contradiction between EVF and Playback-jpg</h3>
<p>Sadly, the praise ends here, because there’s a big problem: there’s inconsistency between how a scene is represented, blinkies-wise, live in the EVF and in a playback-jpg.</p>
<p>Which means the camera contradicts itself. Worse: you can’t be completely sure whether the light simply changed between you pressing the shutter release button and the camera taking the exposure. For all you know, the moment you saw in the “Natural Live View” EVF/back screen has gone by, and the blinked-to-death (I’m exaggerating) playback-jpg is all you’re left with. Example:</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/evflive.jpg" /><br />Scene viewed through the back screen</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/jpgpreview.jpg" /><br />Same, "exposed" scene as playback-jpg</p>
<p>So, which one is “more right”? I’m sure this varies from scene to scene, but here’s this example scene in Lightroom.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/lostdetail.jpg" /><br />Tons of lost detail showing in Lightroom</p>
<p>As you can see, it’s bad. The playback-jpg was “more right”. There’s only a little less detail lost than the playback-jpg indicated, and way more than the view in the EVF/back screen led us to believe. As a general rule, the raw file in the end will retain <em>more</em> detail than the playback-jpg and not less, but that’s obvious.<br />
In case you’re tripped up by “ISOA1 12800” showing in the upper grab from the back screen, that’s something I <a href="https://blog.timmschoof.com/2019/09/26/xh1-em10-features/">already talked about</a>.</p>
<h3 id="its-even-worse">It’s even worse</h3>
<p>There’s <em>even more</em> – I’m sorry. Between EVF and playback-jpg, there’s a third… “opinion”. Fuji cameras stop down and do some amount of actual metering toward taking the picture upon half-press of the shutter release button, <a href="https://blog.timmschoof.com/2019/09/26/xh1-em10-features/">as I have already complained about, also in my previous post</a>. At that point, the metering – or at least the blinkies display – goes crazy. Or becomes more accurate? Who knows? It’s all up in the air.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/halfpress.gif" /><br />In some situations, the blinkies increase in area on half-press of the shutter</p>
<p>To make matters even worse: In between the blackouts that are caused by the shutter when taking a photo, the camera shows the scene – and takes <em>another</em> run at guessing how much of the scene is overexposed. Shown in frame 3.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/betweenblackouts.jpg" /><br />Just another instance of inconsistent metering</p>
<p>I don’t blame you if you lost count. We’re up to <em>four</em> differing interpretations, metering/blinkies-wise, of a given scene (in chronological order):</p>
<ol>
<li>EVF/back screen</li>
<li>EVF/back screen upon half-press</li>
<li>In between blackouts</li>
<li>jpg in playback</li>
</ol>
<p>… what the heck?</p>
<h3 id="another-example">Another Example</h3>
<p>For illustration purposes, here’s two more real-world examples.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/shot_a_back.jpg" /><br />Shot A as a playback-jpg from the back of the camera. Note the (circled) black blinkies indicating the amount of lost detail.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/shot_b_back.jpg" /><br />Shot B as a playback-jpg from the back of the camera. Note the (circled) black blinkies indicating the amount of lost detail.</p>
<p>Shot A I had corrected down so that upon <em>half-press</em> I didn’t get significant blinkies. Then I decided to be stubborn, trust the metering while just looking through the EVF, and take shot B. Just to give an idea of the scale: as the EV+/- indicator… indicates, these two exposures are 1 stop apart.</p>
<p>Was my trust betrayed – and by how much? Let’s find out in Lightroom!</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/shot_a_lr.jpg" /><br />Shot A in Lightroom</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/shot_b_lr.jpg" /><br />Shot B in Lightroom</p>
<p>It’s not easy to see looking at shot B alone, but in direct comparison with shot A, there’s some lost detail, especially on the side of the boat. Zoomed in, the tree’s small branches also look very fuzzy, and there’s blue sky in the whole area behind the tree, not the solid white that shot B shows. My trust was betrayed by ~ 1 stop, as the playback-jpgs did indicate.</p>
<p>So, based upon this example, I’d mentally “throw away” the blinkies displayed while just looking through the EVF, only pay attention to the ones displayed after half-press, and with those overshoot by +1/3 to +2/3 EV, I guess?<br />
That’s workable, but I still don’t appreciate the inconsistency. If the camera can only display accurate-ish measurements after half-press, then don’t bother with blinkies before!<br />
You could do the same based upon the playback-jpg alone, if you miss the feeling of using an old-school DSLR. I, for one, didn’t go mirrorless only to have to go back to playback in order to know how my pictures turned out – <em>like an animal</em>.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Of course, none of this prevents anyone from taking any conceivable picture, and I’m not arguing that it does. You have to be pretty oblivious not to think twice about losing detail in a high-contrast scene. Also, this phenomenon becomes more extreme with more exposure compensation as well as more contrast in the scene (that’s my recollection – I didn’t do explicit tests on it, though).</p>
<p>That being said, I don’t get why in this somewhat mature age of digital photography, an accurate overexposure warning is not on Fuji’s (or any manufacturer’s) radar. The “Natural Live View” feature seems to indicate that it is to some extent, but the utter oversight in every other aspect contradicts this.</p>
<p>What really, really bugs me is the increase of blinkies upon half-press, because it makes the camera feel unreliable to me. And this is not limited to me taking a picture of a lamp in the dark with exposure compensation +4, but happens in absolutely normal everyday situations with elevated contrast, as the shot A/shot B example shows.</p>
<p>My old Olympus E-M10, while being a <em>little</em> inconsistent between EVF and preview as well in a quick test, doesn’t show this (weird) behaviour at all.</p>
<p>I could accept this if there was <strong>a)</strong> less variation and <strong>b)</strong> one of ’em was consistently more accurate. <strong>a)</strong> is not the case, and during my tests for this post, I found no indication of <strong>b)</strong> either.</p>
<p>Thank you for reading/skimming this far. Have you encountered the same problem? How do you deal with it? Most importantly: Was I unclear with anything? Any <a href="https://timmschoof.com/">feedback</a> is very welcome :) Especially about whatever happens or is the reasoning behind the behaviour upon half-press with Fuji cameras. Thank you!</p>
<h1 id="how-an-entry-level-camera-can-be-better-than-a-flagship"><a href="http://blog.timmschoof.com//2019/09/26/xh1-em10-features">How an entry-level camera can be better than a flagship</a></h1>
<p><em>2019-09-26</em></p>
<p>I just switched from an <a href="https://www.dpreview.com/reviews/olympus-om-d-e-m10">Olympus E-M10, an entry-level micro four thirds camera from 2014</a> to a <a href="https://www.dpreview.com/reviews/fujifilm-x-h1">Fuji X-H1, a more or less top of the line APS-C model released in early 2018</a>. <em>If you’re interested in what went into my buying decision as a beginner: <a href="https://blog.timmschoof.com/2015/12/29/choosing-a-camera-in-2015/">I posted about it here</a></em>.</p>
<p>In so many ways, it’s – unsurprisingly – such a joy to use a more professional tool. While the E-M10 was and still is a perfectly fine camera, technology has advanced, and semi- or full-on professional cameras are just built differently from entry-level ones, and have more advanced features.</p>
<p>But, by switching not only the class of camera but also manufacturers, I noticed there are differences that have nothing to do with the former. Which in this specific case means there are aspects in which the X-H1 is worse than the E-M10.</p>
<h2 id="exposure-helps">Exposure Helps</h2>
<p>The live histogram is a great feature in most <a href="https://en.wikipedia.org/wiki/Mirrorless_interchangeable-lens_camera">DSLMs</a>, as is another one that’s mostly just called “blinkies”. These highlight/underexposure warnings indicate which areas of the frame might have zero information in them because they’re all dark or blown out. Modern DSLRs mostly have that feature as well, but by the very nature of a DSLR, you’d only see them during your back LCD photo review.<br />
Anyway: The X-H1 has both of those features, but – unsurprisingly now after the above – they’re worse than on the E-M10.</p>
<p><em>Please excuse the bad photo quality, I didn’t have any means of capturing the EVF/LCD feed via HDMI.</em></p>
<h2 id="histogram">Histogram</h2>
<p class="pic"><img src="http://blog.timmschoof.com/images/xh1_hist.jpg" /><br />The X-H1's histogram is rather small. Can you find it?</p>
<p>The X-H1’s histogram is rather small. More width really would help with judging how much information is in danger of being blown out. Plus: The image information doesn’t go all the way to the edge of the grey background of the histogram graphic. Speaking of which: The grey background is way too bright, you can’t really tell where it begins and where it ends. It doesn’t just look like that on the photo above. It’s very hard to see if there’s already information lost without changing the exposure back and forth and staring at that tiny white line. It’s a bad experience.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/xh1_hist_big.jpg" /><br />X-H1, I appreciate the effort, but how is this useful?</p>
<p>But: There’s a “big histogram” option that is conveniently invoked by a shortcut. It shows RGB histograms as well as a “regular” one. I appreciate the better legibility. But… RGB? Come on. I know that <em>theoretically</em> the color channels can blow out independently. But I have yet to encounter/think of a situation in which that information would be helpful. If you can explain to me how the RGB histograms make sense, I’d be happy to hear it! Until then, I call bullshit on this feature.
To add insult to injury: The big histograms don’t stay on screen when the exposure is changed.<br />
That’s not only bad, that’s un<strong>fucking</strong>usable.
<br /></p>
<p class="pic"><img src="http://blog.timmschoof.com/images/em10_hist.jpg" /><br />The E-M10's superior histogram</p>
<p>How does the E-M10 do it? The histogram is bigger, the information goes to the edge of the background and once there’s information lost, the outer line turns orange – making it a vastly superior tool for judging the exposure.</p>
<h2 id="highlight-warning">Highlight Warning</h2>
<p class="pic"><img src="http://blog.timmschoof.com/images/xh1_blink.jpg" /><br />The X-H1's "blinkies"</p>
<p>The highlight warnings on the X-H1 work exactly as they do on a Canon 40D, with the difference of being displayed live on the screen. Right off the bat, that’s bad. Why? Well, only clipped highlights are represented, lost shadow detail isn’t. There’s no conceivable reason for that.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/blinkies.gif" /><br />This is really stressful – I must acknowledge, though, that the frequency in this gif is exaggerated for comedic effect</p>
<p>Also: framing a shot through the EVF, thinking about all aspects of the scene, is fundamentally different from checking photos on the back LCD. A distracting blinking animation (blown out highlights alternate between black and white every half second) is tolerable during the latter, but it is an absolute nuisance while framing a shot.</p>
<p><br /></p>
<p class="pic"><img src="http://blog.timmschoof.com/images/em10_blink.jpg" /><br />The E-M10's "blinkies" in action</p>
<p>Let’s look at the E-M10: First of all, Olympus considers lost shadow detail worthy of being brought to the photographer’s attention. It is displayed in blue, lost highlight detail in orange. These colors don’t blink; they are shown continuously. Compared to Fuji’s system, it wins because it doesn’t make me freak out. I don’t know about you, but for me that’s always a win.<br />
I already hear the counter argument, saying that orange is ugly and distracting as well. That’s a fair point, but personally, I find the color-ness of the “blinkies” much less distracting than the constant flashing.</p>
<p>Thinking about this, I also have to think about a manual focus help, focus peaking. For this function, Fuji offers a wide variety of colors, each in two variants – albeit no flash on/off variant. I’d appreciate a focus peaking-inspired overhaul of the highlight warning feature.</p>
<h2 id="auto-iso-display">Auto ISO Display</h2>
<p>Changing gears away from the exposure helps: The way the “current” ISO value is displayed while on Auto ISO is handled differently between Olympus and Fuji as well.</p>
<p>The E-M10 always shows the ISO it’d take the exposure with right at that time. The X-H1 needs a half-press of the shutter button (as well as having exposure lock enabled) for that action. Without that configuration, it just shows the maximum ISO of the selected Auto ISO range.</p>
<p class="pic"><img src="http://blog.timmschoof.com/images/ISO_EL.gif" /><br />Auto ISO maximum setting 12800, you're great. But I think we should see other people.</p>
<p>I don’t know about you, but most of the time I’d like to know which ballpark I am in ISO-wise, but specifically <em>don’t</em> need to know what the max Auto ISO is. Fuji even shows which of the three distinct Auto ISO ranges you have activated. The extra complexity of getting around that by enabling exposure lock makes it even worse if you’d like the camera to measure exposure for every single frame of a burst.</p>
<p>My guess is that this is tied to how the cameras measure the scene. The behaviour of when the lens opens and closes the aperture for <strong>a)</strong> just supplying the evf with what it needs and <strong>b)</strong> focusing is completely different between Olympus and Fuji. The X-H1 stops down all the time, while the E-M10 only closes the aperture for actually taking the photo.<br />
Not gonna lie: The Olympus way feels more modern to me. Hearing the aperture unpredictably open and close is a little bit distracting and feels… unreliable. And I’d also think that an open aperture would just give the camera the most information (light) for everything it needs to do. Except for focusing, where stopping down a bit would come with an advantage in accuracy I’d imagine - as long as there’s enough light. Which seems like something the camera could decide? But Sony cameras also have the lens stop down in normal operation I think, so… there’s that.</p>
<p>In short: I’d really love some insight in what the fundamental difference in design of the whole (sensor/metering?) system is. There are one or two forum threads where people were wondering about this behaviour, but afaik there’s no word from Fuji on that.</p>
<h1 id="importing-photos-into-lightroom-cc"><a href="http://blog.timmschoof.com//2018/12/28/import-Lr-cc">Importing photos into Lightroom CC</a></h1>
<p><em>2018-12-28</em></p>
<p>Recently, I took the plunge and started using Adobe’s Creative Cloud. While there’s a ton of stuff to get into with Lightroom CC on the Mac as well as on iOS, I just want to tackle one very specific topic in this post:</p>
<p>How to improve the import process (on the Mac, that is). Why does it need improving? I’m glad you asked. With Lightroom CC, pictures aren’t “touched” on import anymore: They don’t get converted to <a href="https://petapixel.com/2015/12/08/dng-the-pros-cons-and-myths-of-the-adobe-raw-file-format/">.dng</a> and aren’t renamed, as was the case with the “old” Lightroom.</p>
<p>Bummer.</p>
<p>Why do I care? Isn’t Creative Cloud the only true and relevant source anyway? Yes, kinda.</p>
<h2 id="file-format">file format</h2>
<p>But when <em>exporting</em> a file in Lr CC, the format is still relevant. Lr CC offers the option to export an <em>original with edits</em>. If I didn’t convert the files on import, I’d end up with an .orf (the name of Olympus’ RAW format) file – with an ugly .xmp sidecar file containing Lr’s edit information. Yuck. If you work with .dng files, the edits get baked into the file, so there’s no extra .xmp file. Bliss.</p>
<h2 id="file-names">file names</h2>
<p>Next up, file names. While this may show my pathological sense of order, I <em>care about file names</em>. My camera uses a continuous naming scheme for photos. Dealing with that is a non-starter.</p>
<p>Like any sane person, I work with my photos in Lr or some other library software only. And there’s always metadata, so the file names aren’t that relevant. But a naming scheme that reflects the date a photo was taken is a good fallback, and simply reassuring to me. When it comes to backups, sync etc. being able to look at the file and knowing the creation date just makes sense and is something I don’t want to miss.</p>
<h2 id="the-scripts">the script(s)</h2>
<p>Here’s how I solved these problems.</p>
<p>I wrote two bash scripts. One grabs the .orf files from the SD card, copies and converts them. I execute it manually via <a href="https://www.alfredapp.com/">Alfred</a>. The other one renames the files, I execute it manually <em>like an animal</em> as well, after the import and conversion script is done.</p>
<p>If you don’t use Alfred: With Apple’s Automator.app, it’s easy to make these scripts easily executable as a <a href="https://www.engadget.com/2008/01/01/mac-automation-saving-automator-workflows/">system service, or even “apps”</a>.</p>
<p>I couldn’t for the life of me figure out a non-hacky way to string these two together, that’s why I execute them manually (If you know how, let me know!).</p>
<h3 id="copy-and-convert">Copy and Convert</h3>
<p>This script uses <a href="https://helpx.adobe.com/photoshop/using/adobe-dng-converter.html">Adobe’s own, standalone DNG converter</a>. Yes, there is such a thing.</p>
<p>To work for you as well, the script needs a little adjusting: set the paths to <strong>a)</strong> a temp folder for storing the .dng files (which the second script will need) and <strong>b)</strong> your SD card (when it’s inserted and mounted, of course). Then, <strong>c)</strong> change <em>.orf</em> to whatever your camera brand’s RAW format extension is, so that the script grabs those files.</p>
<script src="https://pastebin.com/embed_js/vivrDk43"></script>
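<p>In case the embed above doesn’t load for you, here’s a rough Python sketch of the same idea – <em>not</em> my actual script. The converter path and the <em>-c</em>/<em>-d</em> flags follow Adobe’s DNG Converter command-line documentation as far as I know; all paths are placeholders:</p>

```python
# Hedged sketch of the copy-and-convert step in Python instead of bash.
# The converter path and flags (-c compressed, -d output directory) are
# taken from Adobe's DNG Converter command-line docs; paths are placeholders.
import shutil
import subprocess
from pathlib import Path

DNG_CONVERTER = ("/Applications/Adobe DNG Converter.app"
                 "/Contents/MacOS/Adobe DNG Converter")

def build_convert_cmd(raw_file, out_dir, converter=DNG_CONVERTER):
    """Assemble the argv for one conversion (separate so it's testable)."""
    return [converter, "-c", "-d", str(out_dir), str(raw_file)]

def copy_and_convert(card_dir, temp_dir):
    """Copy .ORF files off the card, then convert each copy to .dng."""
    temp_dir = Path(temp_dir)
    temp_dir.mkdir(parents=True, exist_ok=True)
    for raw_file in sorted(Path(card_dir).glob("**/*.ORF")):
        local_copy = temp_dir / raw_file.name
        shutil.copy2(raw_file, local_copy)  # copy off the SD card first
        subprocess.run(build_convert_cmd(local_copy, temp_dir), check=True)
```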
<p><br /></p>
<h3 id="rename">Rename</h3>
<p>This script uses EXIFtool to rename files, and then moves them to another folder, just for a sense of order. It took me a long time to find the darn <em>4c</em> option (for ascending numbering) in EXIFtool, but in the end I did. The script uses a <em>yyyymmdd-####</em> scheme for renaming, so the first picture I take on Christmas Eve 2018 is named 20181224-0001.dng.</p>
<p>The script also strips any comment from the description tag your camera might have put there (my Olympus is obnoxious like that). <a href="http://web.mit.edu/jhawk/mnt/cgs/Image-ExifTool-6.99/html/install.html#OSX">Get EXIFtool</a> if you don’t already have it, adjust the folders as necessary and the naming to your liking, and you’re all set.</p>
<script src="https://pastebin.com/embed_js/hLtHSpHh"></script>
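<p>Again, in case the embed doesn’t load: the naming scheme itself is easy to sketch without EXIFtool. This is <em>not</em> my actual script – just the <em>yyyymmdd-####</em> logic in plain Python, with capture dates assumed to be already extracted (EXIFtool’s <em>DateTimeOriginal</em> would be the robust source in practice):</p>

```python
# Hedged sketch of the yyyymmdd-#### naming scheme, with capture dates
# assumed to be already extracted (e.g. via EXIFtool's DateTimeOriginal).
from collections import defaultdict
from pathlib import Path

def plan_renames(files_with_dates):
    """Map original names to yyyymmdd-#### names, counting up per day.

    files_with_dates: iterable of (filename, datetime.date) pairs.
    """
    counters = defaultdict(int)
    plan = {}
    # Sort by date then name so numbering is ascending and deterministic.
    for name, date in sorted(files_with_dates, key=lambda fd: (fd[1], fd[0])):
        day = date.strftime("%Y%m%d")
        counters[day] += 1
        ext = Path(name).suffix
        plan[name] = f"{day}-{counters[day]:04d}{ext}"
    return plan
```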
<p><br /></p>
<p>I hope this is helpful. Feedback is very welcome! Cheers.</p>
<h1 id="instagram-but-for-podcasts"><a href="http://blog.timmschoof.com//2018/07/24/no-podcast-platform">Instagram, but for Podcasts</a></h1>
<p><em>2018-07-24</em></p>
<p>Anchor CTO Nir Zicherman wrote on Medium that <a href="https://medium.com/@NirZicherman/why-you-should-never-pay-for-podcast-hosting-9c39becd7cf7"><em>You Should Never Pay For Podcast Hosting</em></a>. I find this quite dishonest. Here’s why.</p>
<p>Anchor, let’s call them a “podcast startup”, is VC-backed, but ultimately wants to make money… probably by selling ads. On the face of it, there’s nothing wrong with that. Probably the majority of podcasts I listen to are ad-supported or ad-financed, and I find that makes for quite an honest and sustainable transaction.</p>
<p>So, why am I so antsy when it comes to Anchor?</p>
<p>Because they are going to inject themselves in between you as a podcaster and your audience, serving ads. While podcasts “hosted” on Anchor are <em>real</em> podcasts in the sense that they have an RSS feed and can be subscribed to in any podcast app, <a href="https://anchor.fm/features#collaborative">they push features like <em>Voice messages</em> and <em>Applause</em></a>. They also call themselves a <em>podcast platform</em>. I bet it’s not long before Anchor reminds you that “Anchor Podcasts” are “<em>best experienced in the Anchor App</em>”.</p>
<p><a href="https://sixcolors.com/link/2018/07/why-you-should-never-pay-for-podcast-hosting/">Jason Snell pointed out YouTube</a> as an example of how something like that might look like (as well as its dangers). I’d go a step further and point to Facebook’s Instagram and <a href="https://twitter.com/TwitterAPI/status/1021475503549677569">Twitter</a>, both at various stages of struggling with, or failing at making money from ads without annoying their users.<br />
This is critical: Once your podcast host (= a relatively straightforward website/filehost) becomes a <em>platform</em> with that kind of business model, you and your podcast have a big problem as soon as the monetization doesn’t quite work out as planned.</p>
<p>Everything that gives me pause about Anchor is amplified by the messaging. It is one thing to advertise what you think you’re good at, or point out a low price. But calling a straightforward hosting service an “outdated business model”, proclaiming that one’s “singular mission” is to “democratize audio”, right after not answering the question what your business model is, all with the goal of inflating a VC-financed bubble – that is just Grade A BS. The post is even oversimplified to the degree that Anchor’s team apparently doesn’t need a salary – lucky them. All of this to sell you on Anchor being <em>free</em> <strong>free</strong> <strong>FREE</strong>, compared to other beginner plans that are 10 whopping bucks and make you an <em>actual</em> independent podcaster? Get.a.grip.</p>
<p>Another thing puzzles me. Anchor emphasizes wanting to lower the barrier to podcasting as well as making it generally easy to start one. Alright, that’s applaudable. But this doesn’t lend itself well to content from experienced podcasters/producers – <a href="https://anchor.fm/tos">regardless of the fact that they’d be unwilling to sign away their content anyway</a> (read “License Grant”).<br />
With just a slight exaggeration to illustrate my point, despite the danger of sounding like a cranky old man, and without meaning offense: A whole bunch of people’s first 10 podcast episodes, recorded with their iPhones, may not be the best that podcasting has to offer – nor the best for creating ad revenue.</p>
<p>“<em>All podcasters, from those with massive followings to those who are just starting out, will be able to make money off of their work.</em>” – 🧐</p>
<p>Let me explicitly say that I am in total support of lowering barriers to entry wherever possible. I find joy in explaining first (or advanced) steps in some of my hobbies (<a href="https://blog.timmschoof.com/2015/09/28/getting-into-photography-in-2015/">photography</a>, <a href="https://blog.timmschoof.com/2018/05/07/how-to-podcast/">podcasting</a>), and I think that the democratization of tools for creation in our digital age is a beautiful thing. So I don’t argue against the part where Anchor competes by offering simple tools for creating audio, or distribution.<br />
But looking at Twitter right now, and how Instagram has influenced photography, I just have to hope that Anchor doesn’t catch on.</p>
<ul>
<li><a href="https://manton.org/2018/07/23/anchor-on-free.html">Manton Reece: <em>Anchor on free podcasting</em></a></li>
<li><a href="https://sixcolors.com/link/2018/07/why-you-should-never-pay-for-podcast-hosting/">sixcolors.com: <em>There’s No Such Thing As Free Podcast Hosting</em></a></li>
<li><a href="https://twitter.com/StephenWilson/status/1021508797599281153">Apple Podcast’s Steve Wilson tweets</a></li>
<li><a href="https://twitter.com/tschoof/status/1021511534126747649">This article condensed into a(n early morning) tweet</a></li>
<li><a href="https://twitter.com/davewiner/status/1005860107022979072">Dave Winer has an important point</a></li>
<li><a href="https://birchtree.me/blog/its-not-netflix-for-podcasts/">Matt Birchler on a different, but kinda related topic: Premium Podcasts</a></li>
</ul>
<h1 id="how-to-podcast-post"><a href="http://blog.timmschoof.com//2018/05/07/how-to-podcast">How to Podcast</a></h1>
<p><em>2018-05-07</em></p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_006.jpg" /></p>
<p><a name="Introduction"></a></p>
<p>After sinking what must be hundreds of hours into podcast production, I think I can give a few pointers for podcasters looking to improve their sound and/or workflow. This is probably a little heavy for beginners. But maybe also just something to come back to :)</p>
<p>I record a podcast with a friend, in a room of my flat. My setup and preferences are derived from that, so <a href="https://www.urbandictionary.com/define.php?term=ymmv">ymmv</a> as they say.</p>
<p>Disclaimer: Since I’m not a pro, take all of this with a grain of salt. If you’re a pro or have more insight into a specific aspect, I’d be more than appreciative if you <a href="https://timmschoof.com/">contacted</a> me for corrections, hints etc.</p>
<p>I’m gonna go through <em>all</em> the different areas (which is probably a bad idea). I’m gonna do my very best to:</p>
<ul>
<li>describe <em>why</em> a certain aspect is important,</li>
<li>show/explain <em>how I</em> deal with it, and then</li>
<li><em>link the best resources</em> on it</li>
</ul>
<h3 id="index">Index</h3>
<ul>
<li><a href="#Introduction">Introduction</a></li>
<li><a href="#AudioChain">Audio Chain</a></li>
<li><a href="#Record">How to Record</a></li>
<li><a href="#microphones">Microphones</a></li>
<li><a href="#Hardware">Other Hardware</a>
<ul>
<li><a href="#Recorder">Digital Recorder</a></li>
<li><a href="#Interface">Interface</a></li>
<li><a href="#Headphones">Headphones</a></li>
<li><a href="#Stand">Microphone Stand</a></li>
</ul>
</li>
<li><a href="#Software">Software</a></li>
<li><a href="#Editing">Editing</a></li>
<li><a href="#Mixing">Mixing</a>
<ul>
<li><a href="#NoiseReduction">Noise Reduction</a></li>
<li><a href="#NoiseGate">Noise Gate</a></li>
<li><a href="#EQ">Equalizer</a></li>
<li><a href="#Compression">Compression</a></li>
<li><a href="#Limiting">Limiting</a></li>
</ul>
</li>
<li><a href="#Loudness">Loudness Standards</a></li>
<li><a href="#Exporting">Exporting</a></li>
<li><a href="#Hosting">Hosting</a></li>
<li><a href="#dbx">Live signal processing with a dbx 286</a></li>
<li><a href="#Resources">More Resources</a></li>
<li><a href="#Thanks">Thanks</a></li>
<li><a href="#MyPodcast">My Podcast</a></li>
</ul>
<h2 id="audio-chain">Audio Chain<a name="AudioChain"></a></h2>
<p>For all of the following to make more sense, let’s do a little groundwork. The journey the sound takes goes something like this: The podcaster’s voice goes into a microphone. That signal needs to be amplified. This might happen in an interface (in its microphone preamps). The signal then goes into a PC, where some recording software – a DAW (Digital Audio Workstation) – picks it up. Then some post processing happens, maybe some cutting, a file is exported… and a podcast episode is born!</p>
<p>There are many variations to this. But essentially, that’s it.</p>
<h2 id="how-to-record">How to Record<a name="Record"></a></h2>
<p></p>
<h3 id="quality">Quality</h3>
<p>Long story short: I record 24 bit. 16 bit is probably enough though, <a href="http://www.producenewmedia.com/16-bit-audio/">Paul argues</a>. In my humble opinion: If this is what you’re arguing over, <strong>a)</strong> your podcast sounds perfect or <strong>b)</strong> there are bigger fish to fry. Don’t worry about it for now, I’ll <a href="#dither">come back to it later</a>.
And the Hz? To my knowledge, there is no practical benefit to recording at more than 44.1 kHz.</p>
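<p>If you like numbers: linear PCM gives you roughly 6 dB of dynamic range per bit, so 24 bit buys headroom that spoken word will rarely use. A quick back-of-the-envelope check (just illustrating the math, nothing more):</p>

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```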
<h3 id="levels">Levels</h3>
<p>Before considering gear, let’s take a moment and think about what the goal is in the recording stage. Compared to everything there’s to think about later, it’s fairly straightforward.</p>
<blockquote>
<p>You want to capture the spoken words in the best way possible</p>
</blockquote>
<p><em>Best way possible</em> in recording terms means <em>as loud as possible, but not too loud</em>. Okay. <em>Too loud</em> <a href="https://en.wikipedia.org/wiki/Clipping_(audio)">means <em>clipping</em></a>, which means <em>distortion</em>. <strong>Which you don’t want</strong>. Therefore, in order to protect against <a href="https://manual.audacityteam.org/man/glossary.html#clipping">clipping, which means the signal exceeding 0dB</a>, you need some headroom. -6 to -12 dBFS as <em>maximum</em> signal level is recommended. <a href="https://en.wikipedia.org/wiki/DBFS">Wikipedia explains what dB<em>FS</em> means</a>. My understanding: It specifies the digital range of levels.</p>
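<p>In case dBFS still feels abstract: it’s just the peak sample value relative to digital full scale, on a log scale. A small sketch (assuming floating-point samples with full scale at 1.0; the function name is mine):</p>

```python
import math

def peak_dbfs(samples) -> float:
    """Peak level in dBFS, assuming floating-point full scale = 1.0."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# A recording peaking at half of full scale sits at about -6 dBFS:
print(round(peak_dbfs([0.1, -0.5, 0.3]), 1))  # -6.0
```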
<p>Since every speaker (and every microphone) is different, this of course means that there’s just no way to figure it out in advance. A good way is to just have every speaker talk for a bit. What I imagine must be hardest for non-pros is to be careful enough. I bet the more professional you are, the more you keep the level on the quiet side.<br />
Two factors: People tend to get more excited during a conversation than during test sentences. And: clipping is worse than a recording that’s too quiet.<br />
The sad news here is also good news: There’s nothing like experience. And experience might come in the form of simply more and more recordings with the same speakers. If you’re recording the same group of people, just iterate: Write down the levels you set, and after the recording check the files and look at the max levels with some kind of meter. You’re looking for the “True Peak” reading. Let’s worry <a href="#Limiting">later</a> about what exactly this means.</p>
<p>If you notice clipping <em>during</em> the recording (usually a red light on whatever recording device you’re using) save the current recording and reduce the levels.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/truepeak.png" width="480" /><br />Some risky peaks</p>
<p>This is “a little hot”, as the pros say (I think). -5 is <em>just</em> okay, but -2.5 is pushing your luck. But, good news: This is not a failed recording by any means.</p>
<p>I’m not giving the recording levels I use, because they’re meaningless for everybody but me in my exact recording setup. But: Depending on the type of microphone you use, the right setting might be 90-95 or even 100% of gain, so don’t be afraid to go there.</p>
<p>This is mostly derived from this article: <a href="https://www.noterepeat.com/articles/q-and-a/20-gain-staging-101">Noterepeat.com: Gain Staging 101 – How to get a clean loud signal</a>.</p>
<h3 id="recording-space">Recording Space</h3>
<p>Consider the room you want to record in. Think about a bathroom, or some noisy restaurant: The less fabric and the more hard surfaces there are, the more echo you’ll have. This is bad for podcasting. You don’t need to build a sound booth though, just consider some things and you’ll do fine. Ray talks about the room <a href="https://www.youtube.com/watch?v=3cckIFFeZcU&list=PLnkXFSi7atP6ZxCLAMLFKrs6ps0OeXarD&index=2">specifically in this video</a>.</p>
<h3 id="microphone-technique">Microphone Technique</h3>
<blockquote>
<p><em>Know the instrument, love the instrument – and use it wisely</em></p>
</blockquote>
<p>Take some time to consider <em>how</em> to speak into a microphone. It’s not as easy as it seems! And once it’s easy, it’s not easy to be really consistent.</p>
<p>I really like <a href="https://transom.org/2016/p-pops-plosives/">this post on transom.org</a>. It says it’s about “plosives”, but covers a lot of ground.</p>
<p>There’s also this lovely video (quoted above):</p>
<div class="videoWrapper-4-3"><iframe width="1024" height="768" src="https://www.youtube.com/embed/K8GBbvo-J9U" frameborder="0" allowfullscreen=""></iframe></div>
<p>My takeaways:</p>
<ul>
<li>Get as close to the microphone as you can without introducing breathing noises or heavy bass from the <a href="http://www.neumann.com/homestudio/en/what-is-the-proximity-effect">proximity effect</a></li>
<li>Generally, putting the microphone slightly off-axis helps with plosives (the post on transom.org explains what I’m talking about)</li>
<li>Don’t vary distance or position of the microphone relative to your mouth throughout the recording. You’ll have enough variation in volume to deal with simply from being more or less excited while talking.</li>
<li>The previous doesn’t apply if you laugh loudly, or scream: For these instances, try to train yourself to increase distance to the microphone so that the recording doesn’t clip. It’ll absolutely still come across in the recording, don’t worry.</li>
</ul>
<p></p>
<h3 id="headphones-for-monitoring-and-editing">Headphones (for monitoring and editing)</h3>
<p>While you record, you should monitor what you’re capturing. It could be argued that once the proper input gain is set, there’s no need for that – but you’re on the other side of that argument once you discover that you’ve captured a high pitched noise with your signal in that otherwise exquisite one-hour conversation.</p>
<p>For checking what you’re recording, honestly any pair of headphones works as long as it’s not letting through too much sound. But you need monitoring headphones anyway, for editing.<br />
What is different about monitors compared to “regular” headphones? Well, monitors are meant to be neutral, not to color the sound themselves. If you like all of your music with some extra bass, you dial that in on your receiver’s EQ. But imagine an audio engineer doing the same <em>while mixing/mastering</em>. Everybody <em>but</em> her would receive an end product mixed with <em>too little</em> bass.</p>
<p>This goes into mixing and mastering as well, but this much I’ll say here: Remember that you’re producing your podcast to be listened to in any situation. Over a car stereo, through earbuds on a train, through a smartphone speaker turned up all the way, anything. You can – and should – still make decisions on what your podcast should sound like, absolutely! But those decisions can’t be made through an extra level of indirection – namely your favorite headphones for listening to EDM.</p>
<p>I went with the widely recommended <a href="https://www.amazon.de/Sony-MDR7506-MDR-7506-Studio-Kopfh%C3%B6rer-geschlossen/dp/B000AJIF4E/ref=sr_1_3?">Sony MDR-7506</a> and haven’t regretted that decision. Maybe one thing: The coating of the ear cushions “dissolved” after some time. <em>Other than that</em> they’re solid. I’ve had them for years, haven’t treated them carefully, and they show no sign of wear.</p>
<h2 id="microphones">Microphones<a name="Microphones"></a></h2>
<p>Aaaaah, microphones. The never ending topic. Good news: It’s not as complicated as you might think from all the talk. <a href="https://marco.org/podcasting-microphones">Marco’s guide</a> is the best place to start.</p>
<p>There are two basic types of microphones, condenser and dynamic microphones, and it’s good to know the basics. Others have written everything there is to write about this and how it’s relevant for podcasters: <a href="https://marco.org/podcasting-microphones#condenserdynamic">Marco’s take</a> is as good as any. Both dynamic and condenser microphones come in the USB and the XLR variety, <a href="https://marco.org/podcasting-microphones#xlrusb">Marco also writes about this</a>.</p>
<p>Quite some time ago I made <a href="https://www.youtube.com/watch?v=AsgWldhZSIM">a little video</a> in which I recommended the Blue Yeti. It’s quite popular (the Yeti, not my video), and it’s the <a href="https://thewirecutter.com/reviews/the-best-usb-microphone/">Wirecutter pick</a> (for USB microphones).<br />
But: I was wrong. It <em>has</em> nice sound (because it’s a large diaphragm condenser microphone). But if you’re not in a studio or don’t want to build a pillow fort every time you’re recording, rejection of room noise and echo becomes an important quality in a microphone.</p>
<h3 id="xlr">XLR</h3>
<p>The <strong>Shure SM58</strong> is a solid microphone for podcasting. It doesn’t sound as clear as the Blue Yeti, but it makes recording two people in one room feasible with minimum effort. And it sounds absolutely more than “good enough”.<br />
Only bad thing: It does need quite a bit of gain, so it should be paired with an at least “okay” microphone preamp (either extra, or built into an interface or recorder, more on that later). For 100€ new, and 50-60€ used, you can’t go wrong. <a href="https://fakesm58.wordpress.com/">Beware of fakes, though</a>.</p>
<p>The <strong>Shure Beta 58A</strong> is an “updated” SM58, and hence also solid. Beware: It’s not necessarily <em>better</em> than the SM58. Depending on voice and microphone technique of the speaker, the sound characteristics (more high end) and more directional pickup pattern may or may not be an advantage.<br />
I mainly tried the Beta 58A because it offers more gain than the SM58, about 4 dB – what I didn’t know was that the signal-to-noise-ratio is <em>the same</em> as the SM58’s. So the noise got louder with the signal. I still like the Beta 58A for my voice, though. For 175€ new, and 80-140€ used, it can absolutely be worth it. <a href="http://www.ziggysono.com/upload/104ShureAgainstCouterfeit.pdf">Beware of fakes as well!</a></p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_002.jpg" /><br />Shure Beta 58A – mine lacks the trademark blue rubber ring on the cage</p>
<p>Right now I’m using the <strong>Shure Beta 87A</strong>, which is a condenser microphone – pause for effect – but allegedly with the good characteristics of a dynamic microphone! This I can corroborate. The rejection is great. When you turn your head, the drop in volume is tremendous, which also means less opportunity for nasty room echo to get in there.</p>
<p>I wasn’t immediately impressed by the sound quality. I guess I expected Blue Yeti-level clarity which the 87A probably can’t deliver because it’s a small condenser(?). But it does sound “nicer” than the 58s.<br />
What I found curious is that it also needs <em>almost as much</em> gain as the dynamic microphones. Overall, I’m not totally blown away by the Beta 87A, but still think I’ll keep it. Or at least use it for some time to really get used to it. For 300€ new and about 220€ used, this is not a win as clear-cut as I’d like it to be.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_009.jpg" /><br />Shure Beta 87A</p>
<h3 id="usb">USB</h3>
<p>If a USB microphone is for you and you’re in the US, the <a href="https://www.amazon.com/Audio-Technica-ATR2100-USB-Cardioid-Dynamic-Microphone/dp/B004QJOZS4">Audio Technica ATR 2100</a> seems to be a tremendous pick as well as a great value. In Europe, you can’t get it. But it’s supposed to be the same as the EU-available <a href="https://www.amazon.de/Samson-Recording-Handmikrofon-Cakewalk-software/dp/B001R747SG">Samson Q2U</a>.<br />
I have no experience with these microphones, so I can’t speak to them. And with the EU version, the deal isn’t quite as great.</p>
<h2 id="other-hardware">Other Hardware<a name="Hardware"></a></h2>
<p></p>
<h3 id="digital-recorder">Digital Recorder<a name="Recorder"></a></h3>
<p>The sound has to be recorded <em>onto</em> something. This is either your PC, or a digital recorder. Let’s focus on digital recorders. They’re great. With a digital recorder, your podcast setup can become mobile!</p>
<p>Besides usability and some ruggedness, for podcasters the most important property is how much <strong>noise</strong> the preamps generate. Especially if you use a dynamic microphone, significant noise can be introduced with all the gain that is needed.</p>
<p>In podcaster circles, people recommend the Zoom H6 or Zoom H5. In my opinion, those are <strong>not suited</strong> for podcasters because there’s no real use for the built-in microphones. The budget that goes towards those can’t be spent on better internal preamps without making for a more expensive product: The H5 is 280€, while my recommendation is 170€.</p>
<p>My recommendation, you ask? I like the <a href="https://www.thomann.de/de/tascam_dr_60d_mkii.htm?ref=search_rslt_tascam+60d_350125_0">Tascam DR-60D MkII</a>. It has two XLR inputs, a built-in limiter (lessens the impact of clipping), quite okay preamps and is just solid. Don’t get confused by the labeling as a recorder <em>for DSLR cameras</em>. It’s all the same thing.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_dr60.jpg" /><br />Tascam DR-60D MkII</p>
<p>If you’re wondering: In my tests, the preamps in the DR-60 are roughly equivalent to the ones in the H5, which are said to be identical to the ones in the H6. The 60D MkII, again, is 170€ and a very suitable (temporary) home for your audio files. If you need more inputs, the <a href="https://www.thomann.de/de/tascam_dr_70d.htm">DR-70D</a> offers that and is otherwise the same as the 60D.</p>
<p>I don’t have any (but would like some) experience with the <a href="https://www.sounddevices.com/products/recorders/mixpre-3">SoundDevices Mixpre-3</a>, but they’re not really available in Europe and… “kind of” expensive. Just in case you were wondering if there’s something better: With audio, there always is.</p>
<h3 id="interface">Interface<a name="Interface"></a></h3>
<p>If you decide to go the XLR route and want to record onto your PC, you’ll need an interface.</p>
<p>Hands down, I think the Tascam US-2x2 is great. <a href="https://sixcolors.com/post/2016/04/low-cost-usb-audio-interfaces-review-cheap-xlr/">Jason thinks so too</a>, <a href="https://thewirecutter.com/reviews/best-usb-audio-interface/">as well as the Wirecutter</a>. Who’d be arguing against this? It’s 125€.</p>
<p>A little more color on the US-2x2: The microphone preamps are <a href="http://tascam.com/product/us-2x2/">one class up</a> (“Ultra-HDDA”) from <a href="http://tascam.com/product/us-2x2/">the ones</a> in the DR-60D MkII (“HDDA”). As an interface connected to your PC it’s not as portable. Have your priorities in order and make a decision, both are great picks.</p>
<h3 id="microphone-stand">Microphone Stand<a name="Stand"></a></h3>
<p>If it fits your room at all, I recommend mounting your microphone on a stand. The <a href="http://www.rode.com/accessories/psa1">Røde PSA1</a> is nice, but not suited for lighter microphones in my experience.<br />
I’m happier with the <a href="https://produkte.k-m.de/en/product?info=523&x53ea2=bcad5f985c86a4c69fefbf354bf231d8">K&amp;M 23850</a>, but only after modding the thread that screws to the microphone holder. The piece of metal that holds the screw is just wedged in between two non-circular pieces, so it doesn’t react well to small adjustments. <a href="https://twitter.com/tschoof/status/987685506413744128">I fixed it with two shims</a>.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_micarm.jpg" /><br /></p>
<h2 id="software">Software<a name="Software"></a></h2>
<p>From full-on DAWs to some nice little apps, there’s a lot that can help you in creating your podcast.</p>
<h3 id="logic-pro-x-200">Logic Pro X, 200€</h3>
<p>I use Apple’s Logic Pro X. It’s <em>definitely</em> not the best tool for podcasts, but it’s what I ended up with. Further down I’ll name a few apps that fit the bill better.</p>
<p>LPX is built for music, and it shows. But you can do some rearranging to make it better suited for podcast editing. Brett Terpstra has a <a href="http://brettterpstra.com/2017/12/12/a-few-tips-for-podcast-editing-in-logic/">nice blogpost on UI Tweaks, strip silence, presets, key commands, and <em>ripple delete(!)</em></a>.<br />
Chase Reeves <a href="https://www.youtube.com/watch?v=vFBb0Olr1D8">covers some of the same ground on YouTube</a>, I found his angle useful as well.</p>
<p>In case you end up using Logic: <strong>Strip Silence</strong> is a great feature. It removes the area of a specific track where nothing happens anyway. This makes it very efficient to deal with gaps in a recording. Sometimes I have to chop up a sentence or even a single word – but most of the time, after applying this feature, I just move around whole sections.<br />
But: I think it’s <em>very</em> buggy. I always have to apply it with longer Pre Attack/Post Release-Times before it’ll properly apply the right ones. Just in case it’s buggy for you as well.<br />
The settings I use: 4%; 0.6; 0.2; 0.4.</p>
<h3 id="audacity-free">Audacity, free</h3>
<p>Audacity is free, and it’s very powerful. I use it for recording, and it’s fine for file conversion and all kinds of little jobs that come up once you’re dealing with audio.<br />
But, and this is a big <em>but</em>, for real editing it’s no good at all, because it only does <em>destructive</em> editing. You’re going to be sad when you apply some effects, edit a whole episode and then discover that you accidentally chopped up the first sentence. You’re going to have to do it all.over.again. That’s when you’ll “graduate” from Audacity to something else at the latest.</p>
<p>But, as I said, Audacity is a versatile tool. Further down I explain how I have used Audacity as a noise reduction tool to pretty good effect. I also <em>still</em> use it simply for recording on my laptop. While it is not pretty by any stretch of the imagination, it can show big dB meters (Logic cannot).</p>
<h3 id="sox-free-but-a-command-line-utility">SoX, free (but a command line utility)</h3>
<p>For repeating tasks, like splitting a stereo wav into two mono files, Audacity sadly isn’t automation-friendly at all. Enter <a href="http://sox.sourceforge.net/">SoX</a>. If you’re at all comfortable in the command line and scripting, it’s a great way not to have to do annoying repeating tasks <em><a href="https://overcast.fm/+IpmIw6Vo/1:31:42">like an animal</a></em>.</p>
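<p>To make the stereo-split example concrete, SoX’s <code>remix</code> effect pulls out individual channels. A hedged sketch of wrapping it in a script (assumes <code>sox</code> is installed and on your PATH; the helper name is mine):</p>

```python
import subprocess
from pathlib import Path

def split_stereo(infile: str, run: bool = False):
    """Build (and optionally run) the SoX commands that split a stereo
    WAV into two mono files, using SoX's `remix` effect."""
    stem = Path(infile).with_suffix("")
    cmds = [
        ["sox", infile, f"{stem}-left.wav", "remix", "1"],
        ["sox", infile, f"{stem}-right.wav", "remix", "2"],
    ]
    if run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # requires sox on PATH
    return cmds

# Inspect the commands without running them:
for cmd in split_stereo("episode.wav"):
    print(" ".join(cmd))
```

<p>From there it’s a small step to loop over a whole folder of recordings instead of clicking through Audacity’s export dialog every time.</p>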
<p>To save you some time in case you have the same idea as me: For a brief moment I thought <a href="http://www.zoharbabin.com/how-to-do-noise-reduction-using-ffmpeg-and-sox/">I found the silver bullet</a> – SoX also offers noise reduction! But only nominally. Artifacts upon artifacts, not usable at all. More on noise reduction <a href="#NoiseReduction">later</a>.</p>
<h3 id="others">Others</h3>
<p>I’ve only used Audacity and Logic Pro X, so I can’t really speak in depth about other software for podcast editing. But I want to mention them to give you a jumping off point.</p>
<ul>
<li>
<p>GarageBand, free with a Mac
If you’re on a Mac, there’s no harm in starting out in GarageBand. Check out <a href="https://sixcolors.com/post/2015/02/how-i-podcast-editing/">Jason’s post on (editing in general and) GarageBand vs. Logic</a>.</p>
</li>
<li>
<p>Reaper/<em>Ultraschall</em>, 70€
<a href="https://www.reaper.fm/">Reaper</a> always comes up, and probably for a good reason. While it doesn’t look especially beginner-friendly, it packs raw power, and there are tons of tutorials available.<br />
There also is the <a href="https://ultraschall.fm/">Ultraschall</a> project, which makes Reaper much more suited for podcast production. Ultraschall comes from a mostly German crowd, so most of it is in German. But! More and more seems to be <a href="https://github.com/Ultraschall/ultraschall-3">happening in English as well</a>.</p>
</li>
<li>
<p>Hindenburg Journalist, 85€
<a href="https://hindenburg.com/products/hindenburg-journalist">Hindenburg</a> is also mentioned often, definitely built with podcasters in mind and offers nice features.</p>
</li>
<li>
<p>Adobe Audition, 24€/month
As well as Logic Pro X, it’s probably a bit overkill for podcast editing. But it has decent noise reduction built-in, so that’s something. The subscription price is really steep for non-professionals, though.</p>
</li>
</ul>
<h3 id="wise-words">Wise Words</h3>
<p>Whatever software you end up using, take time to get comfortable with it. I found that many, many aspects of podcast production are repetitive. This means they should either be automated, or if that’s not possible then at least handled in the most efficient way possible. So using <strong>templates</strong>, <strong>keyboard shortcuts</strong> and any other feature that saves you from repetitive manual labor is crucial.</p>
<p>For Logic Pro X, I already mentioned (again, check out <a href="http://brettterpstra.com/2017/12/12/a-few-tips-for-podcast-editing-in-logic/">Brett Terpstra’s post</a>) Templates, Strip Silence and Ripple Delete. This last one sadly may not be feasible if you are working on more than three tracks at a time.</p>
<h2 id="editing">Editing<a name="Editing"></a></h2>
<p>This one’s totally up to you. How much you want to craft your mono/dialogue, how many “um”s you take out – it’s your decision. Successful podcasts are found in all varieties, from three-hour ramblings to tightly edited 20 minutes.</p>
<p>I talk about methods, hard- and software in this post. But once you arrive at a setup, the editing is where most of the time goes. While everything else is important for how your podcast sounds, the editing brings the content in shape for a really good listening experience.</p>
<p>I spend 2-4 hours per episode, and a considerable amount of it is listening through and straightening out the conversation. “Um”s, fillers, false starts, sentences that go nowhere, chop chop! This inevitably also has to do with me and my co-host not being professionally trained, not talking into microphones for a living. <a href="https://www.youtube.com/watch?v=bAZqE2DnxUI">But then again, who does</a>?</p>
<p>You may go for “less editing”, and more power to you. One word of advice though: While I support the idea that not everything has to be crafted to the n-th degree, you’re asking your listeners for their time. In my opinion, you should have an awareness of that. If you think the appeal of your show is your <em>raw, unedited</em> personality – better make sure that’s really the case.</p>
<p>My point is that, when in doubt, you should be a ruthless editor.</p>
<p></p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_mic_fill.jpg" /><br /></p>
<h2 id="mixing">Mixing<a name="Mixing"></a></h2>
<p>Oh boy, mixing. It’s not as hard as you think. But it’s also far from trivial. You probably need to do less than you think. But getting it right<em>ish</em> might take some failed experiments. But it’s also fun.</p>
<p>Some wise words: The same goes for audio as for any other kind of production. If you can get it right in camera/during the recording/capture of any kind, it’s (almost) always much more elegant to do so. Just because digital capture allows for some remarkable post processing, don’t rely on it. A good capture will result in a better product earlier in the process – and also give you even more flexibility in post.</p>
<p>So, keep coming back to the basics until you’ve nailed them: Record one track per speaker (okay, you don’t need to come back to this one a lot), reduce background sounds, reduce room echo, get a good signal level, improve microphone technique.</p>
<h3 id="mixing-and-mastering">Mixing and Mastering</h3>
<p>I wish someone had told me about the difference between <a href="https://www.sageaudio.com/blog/mastering/what-is-the-difference-between-mixing-and-mastering.php">mixing and mastering</a> earlier. Short version: Mixing is what you do to each recorded track individually to get the most out of it, processing-wise: Noise removal, EQ, compression etc. for podcasts.<br />
Mastering is what is done to the whole production to make it coherent. Since these terms come from the music world, in mastering the loudness and sound of the tracks of a whole album are considered. For podcasting, I’d say this mostly manifests itself in consistency from episode to episode in terms of loudness.</p>
<h3 id="stacking-order">Stacking Order</h3>
<p>I’ll talk about effects I use in the order that I use them in. Even though most of this happens digitally, the order is not arbitrary at all! Each effect changes the signal. Depending on the particular effect, its behaviour and/or effectiveness can be altered dramatically by how the signal was processed before. So what’s the principle to go by?</p>
<blockquote>
<p>“Cut first, enhance later”</p>
</blockquote>
<p>The philosophy is as follows: Get rid of unwanted sounds early in the chain so they don’t come back to bite you when the signal is fed into the next effect. For example: Imagine there are high frequencies that you might want to reduce using an equalizer. Why have the signal go through a compressor before, where these frequencies may lead to unwanted results?</p>
<p>An even simpler example: Noise reduction is something I always put on top. It just doesn’t make any sense to let all the other processing react to the noise that I want to get rid of anyway.</p>
<p>Don’t worry if you’re confused. It’ll come to you naturally once you’ve grown more comfortable with different kinds of processing.<br />
Randy Coppinger talks about stacking order in <a href="http://thepodcastersstudio.com/tps088-compression-for-podcasts-with-randy-coppinger/">this brilliant episode (mainly on compression, that section’s further down) of The Podcaster’s Studio</a> from 52:40 on (<a href="https://overcast.fm/+I1X6mUeE/52:40">Overcast timestamp link</a>).</p>
<h2 id="mixing-noise-reduction">Mixing: Noise Reduction<a name="NoiseReduction"></a></h2>
<p>Remember what I said about getting a clean recording? Well, that’s the goal. However, I haven’t come across a non-professional setup that doesn’t introduce hiss. Some might find “some” hiss not that noticeable, or at least for them it doesn’t warrant whipping out the noise reduction tools.<br />
For me, it does.</p>
<p>Practical tip: If you know what kind of noise you’re dealing with while recording, a good practice is to capture a few seconds of just that pure noise during the session.</p>
<p>Noise reduction plugins work in different ways, but the tradeoff using them is always the same: Get rid of as much noise as you can <strong>without</strong> muffling the signal or introducing artifacts.</p>
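<p>For the curious, the “noise profile” idea can be sketched in a few lines. This toy single-frame spectral subtraction is <em>not</em> what any of the plugins below actually implement (they work frame-by-frame with lots of refinement), but it shows both the principle and the tradeoff: a higher reduction eats more hiss <em>and</em> more voice.</p>

```python
import numpy as np

def spectral_subtract(signal, noise_clip, reduction=1.0):
    """Toy spectral subtraction: estimate the noise's magnitude spectrum
    from a noise-only clip (same length as the signal, in this toy) and
    subtract it from the signal's spectrum, keeping the original phase."""
    spec = np.fft.rfft(signal)
    noise_mag = np.abs(np.fft.rfft(noise_clip))
    mag = np.maximum(np.abs(spec) - reduction * noise_mag, 0.0)
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(signal))

rng = np.random.default_rng(0)
t = np.arange(4096)
voice = np.sin(2 * np.pi * 440 * t / 44100)        # stand-in for speech
noisy = voice + 0.1 * rng.standard_normal(t.size)  # hissy "recording"
profile = 0.1 * rng.standard_normal(t.size)        # noise-only clip
cleaned = spectral_subtract(noisy, profile)
```

<p>Cranking <code>reduction</code> past 1.0 is exactly where the muffling and artifacts start.</p>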
<h3 id="nr-with-audacity">NR with Audacity</h3>
<p>If you want to use Audacity for NR before you feed the cleaned up audio into the DAW of your choice, that’s a perfectly reasonable thing to do. <a href="https://www.podfeet.com/blog/recording/how-to-remove-noise-with-audacity/">This</a> seems like a good detailed tutorial. Except for one thing: “<em>If you want to go wild and play with the controls, have fun, or just click OK like I do.</em>” – The sliders make <em>all</em> the difference, of course! The Audacity manual <a href="https://manual.audacityteam.org/man/noise_reduction.html">describes what they do</a> pretty well. With this information and a little experimentation, you should be able to clean up your signal a good bit. For what it’s worth: To deal with a good amount of hiss from (my old DR-40D’s) preamps, I went with these settings:</p>
<ul>
<li>12dB</li>
<li>Sensitivity 1</li>
<li>Frequency Smoothing 12</li>
</ul>
<p>Remember: You always risk taking away too much from the original signal, introducing artifacts. So, when in doubt: do less.</p>
<h3 id="nr-with-izotope-voice-de-noise-included-in-izotope-rx-6-elements-and-up-130--regularly-on-sale-for-30">NR with iZotope Voice De-noise (included in iZotope RX 6 Elements and up), 130$ / regularly on sale for 30$</h3>
<p><a href="https://www.izotope.com/en/products/repair-and-edit/rx/rx-elements.html">iZotope’s suites</a> seem to be the industry standard for noise reduction… for some reason. Okay, I was probably doing <em>something</em> wrong, but I found it to be lacking compared to Audacity’s NR. I had to be fairly aggressive, crank up Threshold and Reduction quite a bit, still had noise and got more muffled voices. Who knows – maybe Audacity’s NR is just so good, despite it being free.</p>
<p>But I don’t want to be purely negative: Since iZotope offers real “grown up” software, their Voice De-noise comes as a plugin (as well as part of a standalone app). Using it as a plugin, “live” in your editing app of choice is a nice perk. Instead of making all the decisions at the beginning and being stuck with them, it allows you to tweak the settings as you go.</p>
<p>In my experience, using the <strong>Learn</strong> function doesn’t offer any benefit. But the almost set-and-forget nature of the <strong>Adaptive Mode</strong> is nice. For a starting point, the settings below are what I now use for some <em>very gentle</em> denoising added after another kind of denoising (Brusfri, read below).</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/izonr.png" /><br />iZotope Voice De-Noise in action</p>
<h3 id="nr-with-brusfri-60--sometimes-on-sale">NR with Brusfri, 60€ / sometimes on sale</h3>
<p>I am much more satisfied with <a href="https://klevgrand.se/products/brusfri/">Brusfri</a>. It works in a way that lends itself well to continuous noise like hiss or hum. Which I <em>have</em> to think is what most podcasters are dealing with, right?<br />
I was really blown away. You have to feed Brusfri some pure noise since there’s no auto/dynamic mode, but <em>boy</em> does it work. In my experience, there is minimum risk of producing artifacts or muffled sound. In most cases, I even reduce the threshold to somewhere between 40% and 20% and still get stellar noise reduction.</p>
<p>One negative: The interface is a little heavy on the hipster – which is to say it’s a little user-hostile. Some sliders, as well as values readable with<em>out</em> having to hover, would do wonders for its usability.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/brusfri.png" /><br />Brusfri's hipster interface</p>
<h2 id="mixing-noise-gate">Mixing: Noise Gate<a name="NoiseGate"></a></h2>
<p>While noise reduction “cleans up” the signal itself, a noise gate opens and closes (= doesn’t let signal through) depending on the signal strength. It’s handy because it allows you to only deal with the “proper” output of a channel. But it’s easy to overdo it and end up with speech that is cut off. Just remember that natural speech is not on/off, but gets louder and quieter gradually. You only want to cut off the signal when there’s nothing of the speech left in it, not just after the loudest part has gone by.</p>
<p>Let’s go over the most common settings of a noise gate. Since every recording setup and situation is different, most concrete values I use mean <em>nothing</em>. But, since most of podcasting shares the use of human voices, I’m still referencing them to just give <em>a</em> ballpark.</p>
<ul>
<li>
<p><strong>Threshold</strong>: When the signal is under the threshold, the gate is closed. This is the cutoff point, which means that spoken word should consistently be louder than the threshold, as should the occasional “Mh-hmm.”</p>
</li>
<li>
<p><strong>Reduction</strong>: The amount/degree <em>to</em> which the gate closes. You probably need less than you think. The goal of a gate is not to eliminate absolutely every sound in between speech, it’s just to have a nice separation between the important and the unimportant stuff.</p>
</li>
<li>
<p><strong>Attack</strong>: How fast the gate reacts to the signal passing the threshold. For speech, it probably doesn’t <em>need</em> to be as fast as 5ms, because our voices need some ramp-up. <em>But</em> I also don’t see how a fast Attack hurts.</p>
</li>
<li>
<p><strong>Hold/Release</strong>: For speech, I can’t see Hold and Release as philosophically separate values. A Hold of 100ms in isolation is probably too short, but since speech doesn’t keep ringing above the Threshold the way some instruments do, it works in conjunction with a longer<em>ish</em> Release. I think a Release of up to 400ms also worked well for me.</p>
</li>
<li>
<p><strong>High/Low Cutoff</strong>: These are crossed out in the screenshot because they’re basically two equalizer filters that crept into Logic Pro X’s noise gate plugin.</p>
</li>
</ul>
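<p>To make the interplay of these settings concrete, here’s a toy gate in Python. This is my own sketch, not how any particular plugin works – real gates smooth the gain with an envelope follower so they don’t click – but it shows what Threshold, Reduction, Hold and Release each do:</p>

```python
import math

def noise_gate(samples, rate, threshold_db=-50.0, reduction_db=-20.0,
               hold_ms=100.0, release_ms=300.0):
    """Toy noise gate. Attack is instantaneous here; real gates smooth it."""
    threshold = 10 ** (threshold_db / 20)      # dBFS -> linear amplitude
    reduction = 10 ** (reduction_db / 20)      # gain floor when fully closed
    hold_samples = int(rate * hold_ms / 1000)  # stay open this long after speech
    release_samples = max(1, int(rate * release_ms / 1000))
    out, open_for, gain = [], 0, 1.0
    for x in samples:
        if abs(x) >= threshold:
            open_for, gain = hold_samples, 1.0   # signal present: gate open
        elif open_for > 0:
            open_for -= 1                        # Hold phase: stay fully open
        else:
            # Release phase: fade down toward the Reduction floor
            gain = max(reduction, gain - (1.0 - reduction) / release_samples)
        out.append(x * gain)
    return out
```

<p>Note that the gate never mutes completely: in line with the Reduction advice above, quiet passages are only turned down by 20dB here, not eliminated.</p>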
<p class="pic"><img src="https://blog.timmschoof.com/images/ng.png" /><br />Logic Pro X's noise gate plugin</p>
<h2 id="mixing-eq">Mixing: EQ<a name="EQ"></a></h2>
<p>Equalization is used to suppress unwanted aspects and enhance wanted qualities of a voice. Reduce some boominess or a nasal tone, push the high end a little – voilà, the EQ is your friend!</p>
<p>I probably find using the EQ well harder than it actually is, but boy… it’s no walk in the park for sure. Lucky for us, there are great resources out there for EQ as well.</p>
<p>First and foremost: <a href="http://thepodcastersstudio.com/tps087-eq-with-rob-williams/">Ep. 87 of <em>The Podcasters’ Studio</em> with Rob Williams</a> (<a href="https://overcast.fm/+I1VlPYuc">Overcast link</a>).</p>
<p>The exact EQ Cheat Sheet Ray and Rob talk about seems to be no more, except for <a href="https://twitter.com/reaktorplayer/status/915742013424848898">in this screenshot</a>. But Rob has a new thing going: <a href="http://prosoundformula.com/how-to-eq-vocals/">How To Eq Vocals</a>. Keep in mind that this specific tutorial – and most you’ll find – is aimed at EQ for vocals in music, not for spoken word in podcasts. The broader a lesson is, the more of it will apply. Use your best judgment in implementing specific techniques.</p>
<p>What I think are the most important takeaways:</p>
<ul>
<li>Boost wide, cut narrow (Step 5 in <a href="https://music.tutsplus.com/tutorials/8-easy-steps-to-better-eq--audio-942">this tutorial</a>)</li>
<li>Cut to eliminate problems, boost to enhance good qualities</li>
<li>If your EQ curve looks like a rollercoaster, you probably went wrong somewhere – a little EQ goes a long way</li>
<li>Breaks. This goes for all kinds of audio work, but especially for tweaking EQ: Take lots of breaks. Give each tweak some time. Be gentle.</li>
</ul>
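<p>The “boost wide, cut narrow” rule maps directly onto the Q parameter of a parametric EQ band. As a sketch, here are the peaking-filter formulas from the well-known <em>Audio EQ Cookbook</em> in Python (the helper names and example frequencies are mine):</p>

```python
import cmath, math

def peaking_eq(f0, gain_db, q, fs=48000):
    """Biquad coefficients for an 'Audio EQ Cookbook' peaking filter.
    Low Q = a wide, gentle bell (boosts); high Q = a narrow bell (cuts)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = (1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a)    # numerator
    den = (1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a)  # denominator
    return b, den

def response_db(b, den, f, fs=48000):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (den[0] + den[1] * z + den[2] * z * z)
    return 20 * math.log10(abs(h))

# A wide 2dB presence boost vs. a narrow 6dB cut of some low-mid boom:
presence = peaking_eq(4000, 2.0, 0.7)
boom_cut = peaking_eq(250, -6.0, 4.0)
```

<p>Evaluating <code>response_db</code> across the band confirms the intuition: the low-Q boost spreads gently over a broad range, while the high-Q cut only bites right around its center frequency.</p>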
<h2 id="mixing-compression">Mixing: Compression<a name="Compression"></a></h2>
<p>Dynamics compression in podcasting is done to improve intelligibility. It’s an important tool to reach a more even volume on each track. Why is that important, as long as everything is loud <em>enough</em>? Well, it’s not fun to listen to voices of constantly changing volume. This also goes into the topic of loudness standards, but more on that later. Just keep in mind that compression is about reducing the <strong>dynamic range</strong>, the gap between the loudest and quietest part of a given signal.</p>
<p>It takes some mental pull-ups to wrap your head around the concept. I’ll give a short explanation a go. Compression is somehow associated with raising the volume of the signal. This is wrong – but only kind of. What compression does is level out the signal. The signal that exceeds the <strong>Threshold</strong> is knocked down by a certain amount. That amount is defined by the <strong>Ratio</strong>.
So far, the loudest parts of the audio have gotten quieter. This means that the <strong>dynamic range</strong> is reduced.<br />
After compression, typically some <strong>Makeup Gain</strong> is applied. The now “smoothed out” signal as a whole is turned up by a certain amount. This is where compression gets its false reputation from.</p>
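<p>In code, the Threshold/Ratio/Makeup Gain math looks like this. It’s a bare-bones per-sample sketch of my own – no Attack/Release smoothing, so it illustrates the arithmetic rather than a usable compressor – plus a helper for the RMS reading mentioned below:</p>

```python
import math

def rms_db(samples):
    """RMS level in dBFS - a 'more average' reading than the pure peak."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20 * math.log10(rms)

def compress(samples, threshold_db, ratio, makeup_db=0.0):
    """Toy compressor: every dB above the Threshold is divided by the
    Ratio, then the whole signal is turned up by the Makeup Gain."""
    makeup = 10 ** (makeup_db / 20)
    out = []
    for x in samples:
        level_db = 20 * math.log10(abs(x)) if x != 0 else -120.0
        if level_db > threshold_db:
            over = level_db - threshold_db   # dB above the Threshold
            gain_db = over / ratio - over    # knock-down in dB
            x *= 10 ** (gain_db / 20)
        out.append(x * makeup)
    return out
```

<p>With a Threshold of -6dB and a Ratio of 2, a 0dB peak comes out at -3dB: the 6dB of overshoot is halved. That’s all a Ratio is.</p>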
<p>Beware: Compression is complicated enough in practice that I wouldn’t trust a random YouTube tutorial on the topic. I’m positive that everything a podcaster needs in this regard is <em>at least</em> touched on in <a href="http://thepodcastersstudio.com/tps088-compression-for-podcasts-with-randy-coppinger/">Episode 88 of <em>The Podcasters’ Studio</em> with Randy Coppinger</a> (<a href="https://overcast.fm/+I1X6mUeE">Overcast link</a>) that I mentioned before. Randy and Ray go into great detail on almost everything – <em>and</em> compression ;-)</p>
<p>Here’s my two cents on the main takeaways and everything I picked up on the topic since:</p>
<ul>
<li>
<p><strong>Threshold</strong>: As a starting point I suggest – some mixing pros might want to kill me, but I had good results – 6dB below a track’s total RMS level. Pow!<br />
Sidebar: RMS is a kind of signal level reading. It’s a “more average” (that’s the technical term!) representation than a pure peak measurement.</p>
</li>
<li>
<p><strong>Ratio</strong>: Totally depends on your approach/taste/needs. You could start at 2.5 and take it from there.
For illustration purposes I’ll recount an example Randy gave in the episode: You can combine a rather low Threshold with a low Ratio. This results in compression that is active more of the time, but more gently. You could also combine a high Threshold with a high Ratio. This way, the compressor only reacts to the very loud parts of the signal, but then really slams them down.</p>
</li>
<li>
<p><strong>Attack/Release</strong>: For speech, auto settings tend to work well if your compressor plugin offers them. Other than that, a fast talker warrants a fast Attack (somewhere between 10-15ms). A typical Release might be somewhere between 20 and 80ms.</p>
</li>
<li>
<p><strong>Peak vs. RMS</strong>: If you can set whether the compressor reacts to peak or RMS (in Logic, that’s hidden in the “Side Chain” menu), the latter makes more sense. Except if, for that one part of the track with excessive laughter, you want to use the compressor as a limiter and punch down those really high peaks. It’s hard to believe, but in mixing there are no hard and fast rules, and there are many philosophies to try and choose from.</p>
</li>
<li>
<p><strong>Makeup Gain</strong>: This setting is applied to the whole signal. What one could aim for here is the average reduction that is caused by the compressor itself. Why? I think because the headroom you gained by reducing the “overly” loud signal can now be “spent” on raising the whole signal.<br />
BUT you can also leave the makeup gain be and use a separate gain adjustment later.</p>
</li>
<li>
<p><strong>Knee</strong>: I fail to be able to hear the difference, but I picked up that a “soft” knee (higher value) is good for voice ¯\_(ツ)_/¯</p>
</li>
</ul>
<p class="pic"><img src="https://blog.timmschoof.com/images/comp.png" /><br />Logic Pro X's compressor plugin</p>
<h2 id="mixing-limiting">Mixing: Limiting<a name="Limiting"></a></h2>
<p>Limiting is basically the same as compression – only viewed from the “other side”. It’s basically a compressor with an infinitely high ratio that slams the signal down once it reaches the set threshold.</p>
<p>While you can definitely use it in the mixing of each track, it’s more of a mastering tool I think. It’s a way of making sure that nothing exceeds 0dB.</p>
<p>Important to know: A lossless file, like a wav from your editing suite, will peak exactly where it “should”. If you limit to -1dB, the .wav will peak at -1dB. But nobody distributes their podcast as .wav, because that’d be ridiculous. Once you convert to a lossy format like mp3, there will be higher peaks. This has to do with the way lossy compression works: The decoded waveform is only an approximation of the original, and the reconstructed curve can swing higher than the peaks in the original file.</p>
<p>For this reason, once you talk limiting, it’s common to not use dB = decibel, but dBTP = decibel true peak. A dBTP value is arrived at by simply oversampling the signal, usually 4X. So, if you limit at the master stage, definitely set everything you can to dBTP.</p>
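<p>To see what oversampling buys you, here’s a toy true-peak estimate in Python. The interpolation kernel is my own crude choice – real BS.1770 meters use a specified 4× polyphase filter – but the principle is the same: reconstruct the curve <em>between</em> the samples and look for peaks there:</p>

```python
import math

def true_peak_db(samples, oversample=4, taps=16):
    """Crude dBTP estimate: interpolate the waveform between samples
    (windowed-sinc, 4x) and take the highest peak found anywhere."""
    n = len(samples)
    peak = max(abs(x) for x in samples)  # plain sample peak
    for i in range(n * oversample):
        t = i / oversample               # fractional sample position
        if t == int(t):
            continue                     # original samples already counted
        acc = 0.0
        for k in range(max(0, int(t) - taps), min(n, int(t) + taps + 1)):
            d = t - k
            window = 0.5 * (1 + math.cos(math.pi * d / (taps + 1)))  # Hann
            acc += samples[k] * math.sin(math.pi * d) / (math.pi * d) * window
        peak = max(peak, abs(acc))
    return 20 * math.log10(peak)
```

<p>A sine wave whose samples fall slightly “off” its crests reads about -3dB by sample peak but roughly 0dBTP with this estimate – exactly the kind of inter-sample peak a plain meter misses.</p>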
<p>How much higher the peaks will be depends on the amount of data compression (how lossy the final mp3 will be). You want just one value? Okay, -2dBTP is <a href="http://www.producenewmedia.com/loudness-compliance-summarization/">considered safe</a> for most applications. Don’t worry about the “loudness compliance” part for now, we’ll get to that <a href="#Loudness">almost immediately</a>.</p>
<p>Logic Pro X’s limiter plugin is… “a little weird”. In order not to produce a limited but also quieter signal, you have to “add” gain that <em>matches</em> the negative output level (<a href="https://twitter.com/produceNewMedia/status/963433047663276032">Thank you Paul!</a>).</p>
<p><strong>Release</strong> and <strong>Lookahead</strong> – Sorry, no insights from me here. I just use the settings that Paul <a href="http://www.producenewmedia.com/podcast-loudness-processing-workflow/">recommended</a> for mastering voice, never had a problem with those. Just remember that True Peak setting.</p>
<p>I use -9.5dBTP as output level here, which has to do with – you guessed it: loudness. Limiting is important enough for any production, but all of this becomes more important and interesting once we talk loudness standards!</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/lim.png" /><br />Logic Pro X's limiter plugin</p>
<h2 id="loudness-standards">Loudness Standards<a name="Loudness"></a></h2>
<p>Finally!<br />
From designers (who already tend to be very nerdy) I sometimes hear that typography nerds are <em>really</em> the crazy ones. I think the same applies to podcasters and then to those who pride themselves on adhering to loudness standards.</p>
<p>Why is caring about loudness important at all, you ask? I’ll let Rob Byers of <a href="https://transom.org/">transom.org</a> speak:</p>
<blockquote>
<p>when folks listen to podcasts on a mobile device, they are likely on-the-go. Commuting via metro […], bus, walking, whatever. Or they are listening while doing the dishes or driving in the car. All of these scenarios will have loud background noise that competes with the audio.</p>
</blockquote>
<p>This fits with what I already talked about regarding compression and how it helps intelligibility. It is kind of a prerequisite for a coherent production that is loudness standard compliant.</p>
<p>Loudness <em>standard</em>? This is where the whole loudness thing goes a little further. The goal is not only that every podcast is “loud enough”, but that audio of the same kind is of <em>equal</em> loudness – across the board! No more hastily reducing the volume because the next show’s intro makes your ears hurt – a real utopia, but we’re getting there. For this purpose, there needs to be a way to measure “loudness” as well as a standard everybody aims for.</p>
<h3 id="metering">Metering<a name="Metering"></a></h3>
<p>There have always been meters in audio, but for a long time there was no objective way to measure “loudness”. Peak amplitude or even RMS <em>are</em> measurements, but they are based upon the energy within a given signal. This doesn’t correlate to how loud a program is <em>perceived</em> by a human ear. Our ear has a built-in equalizer of sorts: it doesn’t have the same response to different frequency ranges (babies crying, evolution and stuff). The loudness standard gods came up with a measurement that simply takes this “ear EQ” into account. It’s called LUFS – Loudness Units relative to Full Scale. I bet you’re already in love with it.</p>
<p>In the real world, this means that you can use a Loudness Meter in your DAW of choice, most have one by now. Note that using a Loudness Meter as you go is great for tweaking and experimenting. But generally, you’ll need to look at a file – or a track – as a whole, and listening through all the way just to get a loudness reading is a non-starter. “Offline measuring” is what you want, and the resulting reading is called the “integrated” loudness.</p>
<p>If you’re on a Mac, I can recommend <a href="https://github.com/audionuma/r128x">r128x</a>.<br />
The iZotope suites include a “Waveform Statistics” tool that I also like.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/stats.png" width="480" /><br />iZotope Waveform Stats</p>
<p>Another point of confusion might be that stereo and mono are handled differently. There’s a 3dB offset: Stereo files should come out at -16 LUFS and mono files at -19 LUFS. <a href="http://www.producenewmedia.com/podcast-loudness-mono-vs-stereo-perception/">Paul explains it</a>.</p>
<h3 id="loudness-workflow">Loudness Workflow</h3>
<p>Now that we’ve covered the basics, let’s dive right in. I promise it’s less confusing once you’ve applied the workflow yourself. In a nutshell, the loudness workflow is this:</p>
<ul>
<li>Make each track equally loud</li>
<li>Limit for headroom</li>
<li>Add gain</li>
</ul>
<p>– with the last two steps being always the same, and done on the Master track.<br />
Why the limiting? Without it, the later addition of gain would almost certainly lead to several moments of clipped audio.</p>
<p>Here’s my loudness workflow which is built upon <a href="http://www.producenewmedia.com/podcast-loudness-processing-workflow/">Paul’s Workflow post</a>:</p>
<ul>
<li>Step 0: The prerequisite is a properly mixed file.
Depending on how you usually process your audio, some more compression might be necessary. This is so that the limiting we do later doesn’t knock the majority of your signal down, which would sound bad. Besides that, this step is just all the processing you’d normally do, for each individual track.</li>
<li>Step 1: Normalize each track to target level -27.0 LUFS.
This is a <strong>variable step</strong>; the offset is different for each track and each recording:
<ul>
<li>Each track needs to be measured (LUFS, whoop!) in order to calculate the offset to -27 LUFS. Example: If one track measures out at -32.3 LUFS, then the offset is 5.3.</li>
<li>For each individual track, apply the offset by adding a gain effect as the last plugin.</li>
</ul>
</li>
<li>Step 2: Limit for headroom on the <em>Master Track</em>
<ul>
<li>Use a Limiter set to -9 dBTP (in terms of risking intersample peaking this works out for me because I release a 128kbps mp3), Lookahead 1.5ms and a Release of 150ms</li>
<li>(If you use Logic, be aware of the weirdness I pointed out when talking about limiting)</li>
</ul>
</li>
<li>Step 3: Apply 8dB of gain on the <em>Master Track</em></li>
<li>Result: After bouncing, you should end up with a file that measures at -16 LUFS (stereo)/ -19 (mono). It also should have a <strong>loudness range</strong> somewhere south of 8 LU. If you end up with a higher value, you might need to put some more dynamics compression on one or multiple tracks.</li>
</ul>
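<p>The gain arithmetic behind Steps 1 and 3 is simple enough to write down (the function name is mine; the LUFS readings come from a meter like r128x):</p>

```python
def normalize_gain(measured_lufs, target_lufs=-27.0):
    """Step 1: gain in dB that brings a track to the -27 LUFS pre-master
    target. Loudness units map 1:1 to dB of gain."""
    return target_lufs - measured_lufs

# The example from the text: a track measuring -32.3 LUFS needs +5.3dB.
print(round(normalize_gain(-32.3), 1))  # 5.3 - applied as a gain plugin

# Step 3: after limiting at -9dBTP on the master, +8dB of gain takes a
# mix that sits at -27 LUFS to the -19 LUFS mono delivery target.
MASTER_GAIN_DB = 8.0
print(-27.0 + MASTER_GAIN_DB)  # -19.0
```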
<p>For all the remaining why and how (Why -27 LUFS, why three steps, why why whyyyy?) – here are my top 5 resources:</p>
<ul>
<li>A good starting point: Two <em>The Podcasters’ Studio</em> Episodes, an interview with Georg Holzmann
<ul>
<li><a href="http://thepodcastersstudio.com/tps085-auphonic-and-loudness-standards-with-georg-holzmann/">Episode 85</a> (<a href="https://overcast.fm/+I1WpcZkI">Overcast link</a>)</li>
<li><a href="http://thepodcastersstudio.com/tps086-loudness-normalization-with-georg-from-auphonic-part-2/">Episode 86</a> (<a href="https://overcast.fm/+I1XI6BbE">Overcast link</a>)</li>
</ul>
</li>
<li><a href="http://www.producenewmedia.com/podcast-loudness-processing-workflow/">Paul’s Workflow Post</a></li>
<li>transom.org: <a href="https://transom.org/2015/podcasting-basics-part-3-audio-levels-and-processing/">Audio Levels and Processing</a></li>
<li>transom.org: <a href="https://transom.org/2016/podcasting-basics-part-5-loudness-podcasts-vs-radio/">Loudness for Podcasts vs. Radio</a></li>
<li><a href="http://www.producenewmedia.com/loudness-compliance-summarization/">Paul’s Loudness Compliance Summarization</a> (you graduate loudness school when you can explain everything on this chart)</li>
</ul>
<h2 id="exporting">Exporting<a name="Exporting"></a></h2>
<p>You’ve bounced your project file to a nice .wav, all the work is done. All of it? Not quite.</p>
<h3 id="quality-1">Quality</h3>
<p>The question that every podcaster faces at some point: How many kbps are enough for my final mp3?</p>
<p>For a long time, I’ve been in the 64kbps mono camp. “Good enough for voice”, sure! Until I went down a rabbit hole of misguided EQ tweaks that got me nowhere. I had arrived at a sound in Logic that I was very happy with. Only the mp3 had something to it – not quite compression artifacts, but also nothing that existed in the wav file. It drove me crazy. I then finally did <em>another</em> comparison of 64 vs. 128kbps and heard a difference. It may be some aspect of my voice that I’m really (overly) allergic to – however, I found the sound more pleasing, end of story.</p>
<p>Try 64kbps, and if you’re happy with it: great. If not, just go for 96 or 128kbps. Audio is packed into tiny files. People are out and about streaming 4k video, for F’s sake!</p>
<h3 id="forecast-finalizing-chapters-tagging">Forecast: Finalizing, Chapters, Tagging</h3>
<p>I always found including chapters a hassle. Until all-things-podcast-Marco released <a href="https://overcast.fm/forecast">Forecast</a> publicly. Try it, it’s fun! Forecast also makes all the tagging and naming a breeze, derives episode names from your folder/filename structure which is very clever and works great.</p>
<p>A word regarding chapters: mp3 chapters seem to only have a “resolution” of full seconds. If you’re particular about people not jumping right into the middle of a word, consider this while placing markers – or move the audio around a bit. My cohost and I are fast talkers; a little more space between sentences is generally a good thing anyway.<br />
Forecast also allows for very convenient addition of <a href="https://twitter.com/tschoof/status/954297979111968768/photo/1">chapter images</a>.</p>
<p>I can’t come up with a reason why you shouldn’t use Forecast for finalizing podcast projects.</p>
<h3 id="dithering-or-how-to-mp3">Dithering, or “how to mp3”<a name="dither"></a></h3>
<p>Podcasters argue over even the very final step. Just make your project an mp3, right? Not so fast! Enter the <a href="http://www.producenewmedia.com/bit-depth-and-dither/">dithering problem complex</a>! If I’m going from 24 to 16 bit, something needs to happen to that extra information that doesn’t “fit” in the 16 bit file. If not handled properly, the distributed 16 bit file can sound (probably inaudibly, though) worse than one recorded at 16 bit in the first place.</p>
<p>But!</p>
<p>If you are, as any sane podcaster is, distributing an mp3, I’m not so sure. The LAME encoder for example internally works at 32 bit. So I can’t fathom why you’d need to go “down” to a 16bit wav before feeding the file to LAME. Just feed the encoder the best file you have and let it worry about the rest.<br />
That’s Marco’s theory and I <a href="https://www.audiorecording.me/dithering-and-sample-rate-conversion-before-mp3-encoding-complete-study.html/2">found</a> <a href="https://www.kvraudio.com/forum/viewtopic.php?f=62&t=361172&sid=857f666ed3922fb9b3c3182cbd43e31f&start=15">sources</a> that reinforce this stance. In short, this is what I’d recommend:</p>
<ul>
<li>Record a 16 or 24bit file</li>
<li>Do your editing, mixing etc.</li>
<li>Export to whatever quality level you’ve recorded at</li>
<li>Throw the file over to Forecast (which uses the LAME mp3 encoder)</li>
</ul>
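<p>For the curious, the quantization step the whole dither debate is about fits in a few lines. A toy illustration of my own, not what LAME does internally:</p>

```python
import random

def to_16bit(sample, dither=True):
    """Quantize a float sample (-1.0..1.0) to a 16 bit integer.
    Plain rounding correlates the quantization error with the signal;
    TPDF dither adds ~1 LSB of triangular noise first, trading that
    distortion for a constant, benign hiss."""
    scaled = sample * 32767.0
    if dither:
        # triangular PDF noise: the sum of two uniform +-0.5 LSB values
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return max(-32768, min(32767, round(scaled)))
```

<p>If you hand the encoder a 24 bit (or float) wav instead, this step effectively happens inside its 32 bit pipeline – which is the point of the recommendation above.</p>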
<h3 id="audiogram">Audiogram</h3>
<p>Not quite a tool for <em>exporting</em>, but for promoting: The folks at NPR released an <a href="https://github.com/nprapps/npr-audiogram">Audiogram</a> tool. With this, you can create a video file out of an audio snippet. Great for <a href="https://twitter.com/sonne_altona/status/988667009855520768/video/1">promoting an upcoming episode</a> on social media.</p>
<h2 id="hosting">Hosting<a name="Hosting"></a></h2>
<p>Whatever you do, don’t host on SoundCloud. Its future is uncertain, they hide your mp3s from you, and even with a paid plan you have very little control over your feed/files.</p>
<p>I use <a href="http://simplecast.com/">Simplecast</a>. It’s pretty solid, I’d say. They could make the included website a little more customizable and finally come out with more detailed statistics. But overall I’m pretty happy. Their customer service is responsive, they strike a good balance between customizability and usability, and they can <a href="https://open.spotify.com/show/4TLAmx64foh62SJNBwqxQ2">put you on Spotify</a>! They also offer a snazzy embeddable player that I’m using <a href="#MyPodcast">later in this article</a>.</p>
<p>I bet you also don’t go wrong with choosing Dan Benjamin’s (of <a href="http://5by5.tv/">5by5</a> fame) <a href="https://fireside.fm/">Fireside.fm</a>.</p>
<p>The omnipresent (as podcast advertiser) <a href="https://www.squarespace.com/">Squarespace</a> also does podcast hosting – but you have to twist its arm a little bit to get there. I remember quite some forum-hopping and hassle to have everything just-so for <a href="http://www.ungeheuerlich.org/">my old podcast</a>.<br />
Also: the last time I observed a friend set up her website on Squarespace, it felt a little like… I’m gonna say it: Wordpress (if you don’t know Wordpress: It’s powerful, but boy is it ugly and hard to use).</p>
<p>There’s also <a href="https://www.libsyn.com/">libsyn</a> – big podcasts use it, I’ve heard only good things about it. But I have no idea how well suited it is for smaller endeavours.</p>
<h2 id="live-signal-processing-with-a-dbx-286">Live signal processing with a dbx 286<a name="dbx"></a></h2>
<p>You know what’s fun? Taking everything you’ve learned and throwing it out the window.<br />
Not really of course, because that knowledge still comes in handy. I felt a little weird though, “transferring” some/most of the stuff I did in Logic to a box that I plugged in between microphone and Interface. Sorry, I’m getting ahead of myself!</p>
<p>If you feel fancy or adventurous or both, you can do what radio stations or streamers do: Do most of the post processing… well, during recording. There’s a risk: If you screw it up, the recording is screwed up. Philosophically, I’d be totally opposed to this. But the upside is a reduced amount of work after the fact. My post processing now consists of noise reduction, loudness optimization and some EQ that I’m working to get rid of – That’s it!</p>
<p>I went down this path mainly because I was looking for a higher-quality preamp, but now I’m in love with recording a nicely compressed signal.</p>
<p>If this sounds like it might be for you: You might want to take a look at a <a href="https://dbxpro.com/en/products/286s">dbx 286</a>. It’s a dedicated microphone preamp that also does processing – very well suited for podcasters. It does compression, de-essing, some EQ, and some gating. Sure, 170€ per channel is nothing to sneeze at for a hobbyist. But you can get good deals on a used predecessor model (I got one for 30 bucks. 30!).</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_dbx_1.jpg" /><br />My two dbx 286 – mounted on two 3 HE rack parts and metal brackets from the hardware store</p>
<p>A word on the different versions: The dbx 286A seems to be almost indistinguishable from the current model, the 286S. The 286 (no A) has an external power supply. There’s also two versions of the 286A: with “Project 1” labeling and without. The Project 1 version seems to be older. If anything though, it produces less hiss on the highest gain settings than my “regular” 286A.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_dbx_2.jpg" /><br />two dbx 286A, "Project 1" and... "regular" – And yes, I could've cleaned them for the photo</p>
<p>Some resources:</p>
<ul>
<li><a href="http://www.producenewmedia.com/mic-preamp-level-and-gain-staging/">Paul on Gain Staging</a></li>
<li><a href="http://www.producenewmedia.com/dbx-286s-beyond-the-basics/">Paul also praising the dbx 286</a></li>
</ul>
<h2 id="more-resources">More Resources<a name="Resources"></a></h2>
<p>Even with all the sections, there are still “miscellaneous” resources I want to mention:</p>
<ul>
<li>Episodes 58 & 59 of <em>The Podcasters’ Studio</em> – Ray talks post production in general with Joe Gilder
<ul>
<li><a href="http://thepodcastersstudio.com/tps-ep-058-audio-post-production-with-joe-gilder-part-1/">Episode 58</a> (<a href="https://overcast.fm/+I1WJap34">Overcast Link</a>)</li>
<li><a href="http://thepodcastersstudio.com/tps-ep-059-part-two-audio-post-production-for-podcasters-with-joe-gilder/">Episode 59</a> (<a href="https://overcast.fm/+I1VLYYuk">Overcast Link</a>)</li>
</ul>
</li>
<li><a href="http://thepodcastersstudio.com/tps101-podcasting-101-how-to-start-a-podcast/">Episode 101</a> of <em>The Podcasters’ Studio</em> – A guide on how to start a Podcast. Okay, you’re beyond that if you’ve made it here, but it’s a good reminder of what the basics are.</li>
<li>I’m sorry to link my own post here. But if you’ve made it here you also might be interested in how podcasts could be made more accessible for listeners: <a href="https://blog.timmschoof.com/2015/02/15/what-it-takes-to-listen-to-a-podcast/">What it Takes to Listen to a Podcast</a></li>
</ul>
<h2 id="thanks">Thanks<a name="Thanks"></a></h2>
<p>For the many, many small tips I received, and for general contributions that motivated hobby and professional podcasters alike can benefit from a lot, I want to take this opportunity to thank these podcast aficionados/podcasters/sound processing experts:</p>
<ul>
<li>
<p><a href="https://sixcolors.com/">Jason Snell</a> – I’ve bugged Jason via twitter several times, and he was always very nice and helpful. Before Forecast was public, I was able to hack together a script that generated podcast mp3s – something that only seemed feasible because I tweaked a script of Jason’s. If you do podcasting and hit a problem Jason has written about – maybe podcasting on iOS? – you’re definitely in luck!<br />
All of Jason’s writing <a href="https://sixcolors.com/topic/podcasting/">on podcasting</a>.</p>
</li>
<li>
<p><a href="http://thepodcastersstudio.com/">Ray Ortega</a> – I have learned <em>so much</em> from the episodes of <em>The Podcasters’ Studio</em> I linked to in the sections above that I almost can’t believe it. They’re an amazing start in their respective topics for sure. But they’re so dense that you can come back once you have the basics down and learn twice as much. Just an incredible resource.<br />
In addition to that, Ray has always been very nice and helpful when I bugged him via twitter.</p>
</li>
<li>
<p><a href="http://www.producenewmedia.com/">Paul Figgiani</a> – I hit Paul’s website while wandering the desert of loudness compliance confusion, and it was everything I had been looking for. He also tweets about new plugins or general developments in podcasting. And when I hit the enigma-like behavior of Logic’s limiter, he took the time to reproduce my setup and <a href="https://twitter.com/produceNewMedia/status/963433047663276032">present me with the solution</a>.</p>
</li>
<li>
<p><a href="https://marco.org/">Marco</a> – Well, Marco’s <a href="https://marco.org/podcasting-microphones">Podcast microphones Review</a> is legendary by now. If this wasn’t enough, with the release of <a href="https://overcast.fm/forecast">Forecast</a> he gave all podcasters a really really amazing present. I almost forgot, he also is the developer of <a href="https://overcast.fm/">Overcast</a>, many podcasters’ favorite iOS podcast app.</p>
</li>
</ul>
<p>There are even more fellow podcasters that I had small, very helpful interactions with. I think it’s great that podcasting is to a large extent made up of people who want to help each other.<br />
With this article, I’m trying to do my part in making podcasting more accessible. Or at least in demystifying a higher production quality a bit.</p>
<p class="pic"><img src="https://blog.timmschoof.com/images/pc_003.jpg" /><br /></p>
<h2 id="my-podcast">My Podcast<a name="MyPodcast"></a></h2>
<p>If you want to hear what all of this know-how sounds like, give my podcast a listen (it’s German though): <a href="http://sonnealtona.de/">Die Sonne über Altona</a> – as I write this, there are 42 episodes released. Naturally, I think the latest episode sounds the best.</p>
<iframe frameborder="0" height="200px" scrolling="no" seamless="" src="https://embed.simplecast.com/00655738?color=3d3d3d" width="100%"></iframe>
<p>If you’re wondering what our first episode sounded like, <a href="https://blog.timmschoof.com/2017/06/08/sonne-altona/">look no further</a>. Quite a step up! I’ll even share the <a href="http://www.ungeheuerlich.org/episoden/1-craft-beer">first podcast episode I ever did</a>. One track, recorded on… an iPhone 4S I think.</p>
<p>Again, if you have any input or questions, please don’t hesitate and <a href="http://timmschoof.com/">contact me</a>.</p>
<p>Thank you so much for reading/skimming/being interested :) Cheers!</p>
http://blog.timmschoof.com//2017/08/09/customer-serviceCustomer Service2017-08-09T18:08:00+00:002017-08-09T18:08:00+00:00Timm Schoofhttp://blog.timmschoof.com/<p>I recently got my PlayStation 4 Pro swapped out for a new unit <em>and</em> a second, identical run of photo prints, both of which weren’t necessary at all. In both cases, I was in contact with customer service, inquiring about a problem.</p>
<p>The PS4 seemed to produce faulty shadow graphics, the run of prints had a few pictures with bad color gradients. Both times I had simply asked customer service the question “<em>Is this normal?</em>”. Both times it seems the scripted responses or actions didn’t allow for actually pursuing an answer to that question. Instead, I was asked if I had tried rebooting the PS4, and was promptly sent that second set of prints – again with faulty color gradients. After a ton of persuasion, I got the PS4 switched out for another unit, which… <em>drumroll</em> showed the same symptoms. Still <a href="https://twitter.com/tschoof/status/851830545008951296">can’t believe this is where technology is at regarding shadow graphics</a> – but that is beside the point.</p>
<p>I get it: Customer service and support are expensive. You need scripted responses and standard procedures to manage the volume of requests that hit your infrastructure every second. But at the point where that system leads to totally unnecessary costs <em>and</em> a crappy experience for the customer, you need to acknowledge that it’s just not working.</p>
<h2 id="more-like-new-lolker-amirite">More like <em>New LOLker</em>, amirite?</h2>
<p>Another example which shows that taking customer service requests seriously can lead to a much, much better product – or, at least, a less bad product:</p>
<p>I couldn’t log in to my newyorker.com account, got a <em>wrong password</em> message. I guess you’ve been there, a perfect setup for a <em>great</em> customer service conversation if there’s ever been one. Thing was, I didn’t enter the wrong password, I entered exactly the same one the form on the website had previously accepted as valid. But since I’m such a <a href="https://1password.com/">poweruser</a>, I chose a password that was, in fact, “<a href="https://www.xkcd.com/936/">too long</a>”, at least for the part of the website that <em>accepts</em> passwords. It would’ve been one of the great mysteries of technology, because the customer service got me nowhere. Huge shoutout to twitter user <a href="https://twitter.com/tschoof/status/854786468958621696">@zacwest for pointing the error out in a googleable manner</a>.</p>
<p>When I pointed, eh, <em>wanted to</em> point this out in my customer service chat, <a href="https://twitter.com/tschoof/status/854786813344587776">this happened</a>:</p>
<p class="pic"><a href="https://blog.timmschoof.com/images/newyorker_cs.jpg"><img src="https://blog.timmschoof.com/images/newyorker_cs.jpg" /></a></p>
<p><br />
This was of course after the “<em>let me reset your password for you</em>“-dance which I had to go through <em>again</em> despite having explained that I had already done so.</p>
<p>With something like this, no smug realism-cutthroat-capitalist-this-is-just-how-it-is attitude can justify the abysmal quality of the vast majority of customer service experiences. Security (and that’s a big one!) aside, it simply <em>has</em> to be more expensive to deal with all the requests this bug generates than to fix the underlying problem. But seemingly, again, “inquiring about/fixing the underlying problem” is not a common action customer service representatives are allowed or motivated to undertake.</p>
<p>To be clear: I am not blaming this on the individual representatives I was in contact with. Most likely the customer service system simply rewards “successful requests” over a real, sustainable analysis of the problem at hand. A symptom of bad management/environment, not individual bad performances.</p>
<h2 id="surprise-and-delight">Surprise and Delight</h2>
<p>I don’t want to be purely negative, and would rather emphasize my point by giving a positive counterexample. A customer service <em>issue</em> is always also an <em>opportunity</em> for delighting the customer, or at least making them feel better than if the problem hadn’t existed in the first place. Theoretically, having no issues with the product at all would be preferable, but since we’re all human, a successful interaction has the potential to make us feel special.</p>
<p>A good friend of mine was recently dissatisfied with a soup she bought on her lunch break. She wrote in, and what she got back was a) an apology, b) an explanation and c) a shipment of free soups, <em>all</em> of which are very important.</p>
<p>The free soup is really the extra mile, but what is the price of some soup compared to the value of a real fan of your brand?</p>
http://blog.timmschoof.com//2017/07/23/wireless-everythingWireless Everything2017-07-23T16:00:00+00:002017-07-23T16:00:00+00:00Timm Schoofhttp://blog.timmschoof.com/<p>I’m kind of a technology believer, but sometimes even I don’t anticipate <em>how</em> awesome something is gonna be.</p>
<p>Wirelessness is one of those things. I long had the idea of getting Bluetooth headphones, but somehow didn’t get around to it. The early in-ear models with their neck bands always seemed like a flawed design to me, and the really big models are kind of pointless anyway. I liked on-ear models better, despite their bulkiness, because that allowed for them to have decent battery life and enough space for some controls.</p>
<p>But of course the AirPods are even better. First, or at least early in the weirdly named “<a href="http://thewirecutter.com/reviews/best-true-wireless-headphones/">true wireless</a>” category, they really change the whole experience: You forget you’re wearing them because they’re so light. Me spending <em>several seconds</em> untangling my old headphones’ cables every time I put them in seems outright barbaric now.</p>
<p>A few days ago, I tried out my old Shure in-ears again. Their sound quality is superior, no question. But even the short travel between sofa and bed, handling and arranging a few things, seemed so ridiculous with a cable hanging <em>from my ears</em>. I mean, get a grip!</p>
<p>This really is the epitome of how we relate to new things or possibilities. Thinking about it some time ago, I would have been like: “<em>No cable, big deal</em>”. Only now I recognize all the friction that comes with a wired design. Nothing like that to make you realize that while technology is good and all, it’s the best when it gets out of the way.</p>
<p>Anyway, now I’m all wireless. People on the street look at me, and then keep looking a bit longer while realizing my headphones are wireless – or realizing they think I look weird with these stubs in my ears. I smile at them, letting them know that yes, my life is <em>that</em> awesome. And theirs can be, too.</p>
http://blog.timmschoof.com//2017/06/08/sonne-altonaMy new Podcast: Die Sonne über Altona2017-06-08T08:00:00+00:002017-06-08T08:00:00+00:00Timm Schoofhttp://blog.timmschoof.com/<p>I’m excited to finally share my new Podcast with you! <a href="http://sonnealtona.de/"><strong><em>Die Sonne über Altona</em></strong></a> is co-hosted by my friend <a href="https://twitter.com/Oladi_Naht">Alina</a> and me. We talk (in German) about everything that’s near and dear to our hearts: pop culture, politics & society, design, art and everything in between.</p>
<p>The first episode is about a very German topic as well: traffic light signal transmitters (and of course: their design flaws). It’s pretty good.</p>
<p>We’re having so much fun doing this project, and would love for you to give us a chance in your weekly podcast rotation! Thanks.</p>
<p>Special thanks to <a href="http://www.margaretelaue.de/">Maggy</a>, who came up with our beautiful logo!</p>
<p>Getting into podcasts is hard, <a href="https://blog.timmschoof.com/2015/02/15/what-it-takes-to-listen-to-a-podcast/">as we all know</a>, so I’d appreciate any help in spreading the word. These are all the places for sharing and liking and following:<br />
<a href="http://sonnealtona.de">sonnealtona.de</a> (our website is pretty!) – <a href="https://rss.simplecast.com/podcasts/2627/rss">RSS</a> – <a href="https://itunes.apple.com/de/podcast/die-sonne-über-altona/id1244439233">Apple Podcasts</a> – <a href="https://overcast.fm/itunes1244439233/die-sonne-ber-altona">Overcast</a> – <a href="http://pca.st/LC5Q">Pocket Casts</a> – <a href="https://www.facebook.com/sonnealtona">Facebook</a> – <a href="https://twitter.com/sonne_altona">Twitter</a></p>
<p>And here’s the first episode:</p>
<iframe frameborder="0" height="200px" scrolling="no" seamless="" src="https://embed.simplecast.com/a171172b?color=3d3d3d" width="100%"></iframe>
http://blog.timmschoof.com//2017/02/21/pumping-toxic-masculinityPumping Toxic Masculinity2017-02-21T08:00:00+00:002017-02-21T08:00:00+00:00Timm Schoofhttp://blog.timmschoof.com/<p><strong><em>Pumping Iron</em></strong> is a 1977 documentary about bodybuilding, or rather about Arnold Schwarzenegger winning his sixth Mr. Olympia title. At the time, the documentary boosted Schwarzenegger’s image. I find that fascinating, as I was rather repulsed when I recently watched it. Not by the insight into the world of bodybuilding, but by how openly Schwarzenegger talks, even brags, about how he manipulates competitors.</p>
<p>Take a look at Schwarzenegger having breakfast with Lou Ferrigno and his parents. Lou’s father was also his trainer.</p>
<div class="videoWrapper-16-9"><iframe width="1280" height="960" src="http://www.youtube-nocookie.com/embed/PNiJSR07w5w" frameborder="0" allowfullscreen=""></iframe></div>
<p>Schwarzenegger also explicitly says that he’d give Franco Columbu, a competitor he was friends with, wrong advice to have an advantage. Call me naive, but I think these aren’t very good traits, for nobody.</p>
<p>In order not to be vulnerable to any kind of negative impact himself, Schwarzenegger basically shut down his emotions, he explains. To the point that when his father died two months before a competition, Schwarzenegger wasn’t affected by it. Overall, Schwarzenegger’s behavior might be a perfect example of <a href="https://en.wikipedia.org/wiki/Toxic_masculinity">toxic masculinity</a>: Be dominant, be aggressive to win at whatever cost, don’t show feelings.</p>
<p>Schwarzenegger says that he sees these methods as “<em>tools that are available, so you might as well use them</em>”. I’d argue that whoever sees targeted attacks on an opponent’s psyche as “just another tool” doesn’t compete in bodybuilding, but in ruthlessness.</p>
<p class="gif"><img src="https://blog.timmschoof.com/images/the-terminator-toy-crush.gif" width="400" /></p>
<p>I know there’s smack talk in sports, but that’s a different thing. With yelling and some insult, while not being very classy, there’s a clear understanding of rivalry and what you’re competing to accomplish. Actively branching out and undermining your competitor’s psyche instead is pretty close to plain old sabotage. Mmmh, sweet success! And I thought being a good winner was a plain and simple concept, even easier than being a good loser.</p>
<p>But come on now, this is just me, unfairly judging something that happened 40 years ago with today’s morality standards, right? Well, Schwarzenegger’s behavior mostly seems to be <a href="http://ignorelimits.com/psychological-warfare/">praised</a> for breaking new ground in <a href="http://www.businessinsider.com/arnold-schwarzeneggers-psychological-warfare-2015-2?IR=T">“psychological warfare”</a>, instead of being identified as what it is: <a href="http://www.thedailybeast.com/articles/2011/05/24/arnold-schwarzenegger-8-crazy-scenes-from-pumping-iron-his-1977-documentary.html">manipulative and shallow</a>.</p>
<p>I am not against being good at something, I am not at all against fair competition. Ferrigno did in fact win the title the following year and I’m not mourning his loss against Schwarzenegger. This 40-year-old documentary simply shows a great example of someone who puts success above all else. We all know where this led Schwarzenegger: Career in acting and politics, belonging to the highest circles of US society. Realizing that the toll this took is a lack of humanity towards others as well as himself, casts this success in a very different light. With all of this out there and Schwarzenegger even bragging about his methods, without being criticized for it, one is hard-pressed to defend the mechanisms by which society awards approval. Also: Yay documentaries!</p>
<p><a href="https://www.netflix.com/search?q=pumping%20iron"><strong><em>Pumping Iron</em></strong> is currently on Netflix</a>.</p>
http://blog.timmschoof.com//2015/12/29/choosing-a-camera-in-2015Choosing a Camera in 20152015-12-29T11:28:40+00:002015-12-29T11:28:40+00:00Timm Schoofhttp://blog.timmschoof.com/<p>So, <a href="https://blog.timmschoof.com/2015/09/28/getting-into-photography-in-2015/">in my last post I wrote about photography in general</a>, and specifically about what helped me along the very first steps. In this one, I want to share the considerations and decisions that led to me going for the <strong>Olympus OM-D E-M10</strong> (can you get <em>all</em> the hyphens right?), a mirrorless camera. Yes, just before 2015 runs out. Since I try and do sustainable blogging though, the broad strokes should stay relevant until the camera landscape changes dramatically.</p>
<p>I’m gonna try and give a quick and reasonably simplified overview of the current landscape in cameras first, how they differ, then describe what I find important interacting with a camera, how I almost settled for a used DSLR, but then opted for a part of the mirrorless future. At the end I can’t help it and give a few tips on using the E-M10.</p>
<p class="pic"><a href="https://blog.timmschoof.com/images/EM10-beer.jpg"><img src="https://blog.timmschoof.com/images/EM10-beer.jpg" /></a><br />Olympus E-M10, with the Olympus 45mm f1.8 lens – and randomly placed former-hipster-now-mainstream beer</p>
<h2 id="kinds-of-cameras">Kinds of Cameras</h2>
<p>A little primer on the current field of products: There are <a href="https://en.wikipedia.org/wiki/Digital_single-lens_reflex_camera"><strong>DSLRs</strong></a>, which have an optical viewfinder, and because of that (I think), a mirror and a prism. This makes them big and bulky, and kinda heavy. But DSLRs are what <a href="https://instagram.com/p/7ilZuTBGX5/">professionals still swear by</a>.</p>
<p>DSLRs are the ones <em>with</em> the mirror, so another kind of camera is called <a href="https://en.wikipedia.org/wiki/Mirrorless_interchangeable-lens_camera"><strong>mirrorless</strong></a>, of course. This by itself determines much about how they’re built and what kind of properties they have. In short: They can be smaller and lighter than a DSLR, but they cannot have an optical viewfinder, and most of them have worse autofocus systems. But technology advances, and mirrorless bodies catch up quickly. They have electronic viewfinders (EVFs) that only get better with time. Anyway, DSLR vs mirrorless is the kind of discussion with sometimes religious undertones in the camera geek community right now.</p>
<p>The Wikipedia definition also throws an “<em>interchangeable lens</em>” property in there, and that distinguishes it from the next kind of camera. Before we get to that, I have to say that there’s not one mirrorless “standard”, since there are a few manufacturers doing their own thing. This isn’t the biggest difference between them, but the lens mount is ultimately the reason for the incompatibility of those systems. So by picking a specific manufacturer, you decide on which range of lenses you want to be able to choose from (just as with DSLRs between Nikon and Canon). There is the <strong>Micro Four Thirds system</strong> (or MFT, or M43, or µFT) championed by Panasonic and Olympus; there are <strong>Fuji’s mirrorless cameras</strong> (X-mount lenses), and there’s <strong>Sony’s ɑ-series</strong> (E-mount lenses).</p>
<p>Anyway, the next kind of camera is, of course, a <a href="https://en.wikipedia.org/wiki/Point-and-shoot_camera"><strong>compact</strong> or <strong>point-and-shoot camera</strong></a>. These are “mirrorless” as well, but have a fixed lens. While you’re just getting ready to be snobby and disregard these cameras (I certainly would), let me tell you that “compact” – actual surprise! – doesn’t necessarily mean “crappy”. To pick a popular current example: The Fuji X100 models are well-reviewed, loved as a fun camera by seasoned photographers, have a large sensor (more on that later) – and cost over 1000€/$, if that convinces you.</p>
<p class="pic"><a href="https://www.flickr.com/photos/janitors/16130683260"><img src="https://blog.timmschoof.com/images/X100T.jpg" /></a><br />Fuji X100T – photo by Kārlis Dambrāns, used under <a href="https://creativecommons.org/licenses/by/2.0/">CC-BY</a></p>
<h2 id="kinds-of-sensors">Kinds of Sensors</h2>
<p>When you’re talking cameras and lenses, you almost always are also talking <a href="https://en.wikipedia.org/wiki/Image_sensor_format">sensor sizes</a>, directly or indirectly. So, why does it matter?</p>
<p>The sensor of a digital camera is what film is in a film camera. And sensors come in different sizes. 35mm film is referred to as “full frame” (“Vollformat”, or perplexingly, “Kleinbild” in German, which translates to “small picture”). High-end DSLRs and some Sony mirrorless cameras have <strong>full frame</strong> sensors. Everything smaller than full frame is also referred to as a “crop sensor”. A very widespread sensor size is <strong>APS-C</strong>. Fuji and Sony use APS-C for some of their mirrorless cameras, and it’s also used in most consumer/prosumer-DSLRs. Relevant to my angle here is also the even smaller <strong>four thirds</strong> size used in the MFT systems mentioned above. Here, the crop factor is 2x, by the way. So if a lens is a 25mm, it has the properties of a 50mm lens on a full frame (35mm – confused yet?) sensor. People talk about 50mm being the “35mm equivalent” of that lens.</p>
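<p>If you like code more than prose, the whole crop-factor arithmetic fits in a few lines of Python. The crop factors here are the usual approximate values (APS-C varies slightly between manufacturers):</p>

```python
# Converting a lens's focal length to its "35mm equivalent"
# using the sensor's (approximate) crop factor.
CROP_FACTORS = {
    "full frame": 1.0,
    "APS-C": 1.5,          # roughly; Canon APS-C is 1.6
    "Micro Four Thirds": 2.0,
}


def equivalent_focal_length(focal_mm: float, sensor: str) -> float:
    """Full-frame-equivalent focal length in millimeters."""
    return focal_mm * CROP_FACTORS[sensor]


# A 25mm MFT lens frames like a 50mm lens on full frame:
print(equivalent_focal_length(25, "Micro Four Thirds"))  # 50.0
```

<p>That’s all the “35mm equivalent” talk really is: focal length times crop factor.</p>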
<p>These cover the most relevant I think. And just if you were wondering (I was), there’s also something bigger, <a href="https://en.wikipedia.org/wiki/Medium_format_(film)">medium format</a>. You like graphics? Here’s a graphic:</p>
<p class="pic"><a href=""><img src="https://blog.timmschoof.com/images/sensor_sizes_overlaid_inside_2014.png" /></a><br />Sensor Sizes <a href="https://commons.wikimedia.org/wiki/File:Sensor_sizes_overlaid_inside_2014.png">by MarcusGR, used under CC BY-SA</a></p>
<p>The bigger the sensor, the more information it can capture (d’uh). But there are two different ways to go about that: A sensor bigger than another can either be filled with <em>more pixels</em>, or with the same number of, but <em>bigger pixels</em>. More pixels allow for more detail (a.k.a. resolution), bigger pixels allow for more light to be caught (good low light performance, meaning less noise at higher ISO, which in turn allows for shorter exposures). Those pixels, of course, are the <strong>megapixels</strong> everyone was talking about a few years ago.</p>
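<p>A rough back-of-the-envelope calculation makes the tradeoff concrete. The sensor dimensions below are approximate, and real sensors differ in more than pixel area, but the geometry alone shows why the same megapixel count means much bigger pixels on a bigger sensor:</p>

```python
# Idealized comparison: same megapixel count spread over
# different sensor areas means different per-pixel area,
# and bigger pixels gather more light each.
SENSOR_AREAS_MM2 = {  # approximate sensor dimensions
    "full frame": 36.0 * 24.0,
    "APS-C": 23.6 * 15.6,
    "four thirds": 17.3 * 13.0,
}


def pixel_area_um2(sensor: str, megapixels: float) -> float:
    """Approximate area of one pixel in square micrometers."""
    area_um2 = SENSOR_AREAS_MM2[sensor] * 1e6  # mm^2 -> um^2
    return area_um2 / (megapixels * 1e6)


for name in SENSOR_AREAS_MM2:
    print(f"{name}: {pixel_area_um2(name, 16):.1f} um^2 per pixel at 16 MP")
```

<p>At the same 16 MP, a full frame pixel has nearly four times the area of a four thirds pixel – which is the geometric core of the low-light advantage.</p>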
<h2 id="mix-and-match">Mix and Match</h2>
<p>While there are differences in picture quality, any modern camera’s is sufficient for almost any purpose (if you believe people who know way more about this than me). That makes <strong>other factors</strong> more important: size and weight, number of dials, lens selection, kind of viewfinder, touchscreen, articulating screen, Wifi?</p>
<p>Do you want to tap on a touchscreen to focus and take the picture? Then you don’t need an EVF. Do you “just” want to take nicer pictures than your phone allows for? Then you don’t necessarily need any fancy dials and function buttons and probably want the body to be as slim as possible. Do you want nice jpegs out of camera (no processing of the RAW in Lightroom)? People love the Fujis for that. If you just want a full frame sensor, you are either looking at a pro DSLR or the more expensive Sony cameras.</p>
<p>Here’s an example: The Olympus E-PL7 and the E-M10 are almost identical looking at the specs. Compared to the E-M10, the PL has one manual control dial instead of two and gives up some buttons, a little extra grip and the EVF, but comes with a screen that folds down 180 degrees (sick selfie capability!). These two cameras will take the exact identical picture, though. It’s literally “only” the described differences in the body. Here’s a <a href="http://cameradecision.com/compare-size/Olympus-PEN-E-PL7-vs-Olympus-OM-D-E-M10">side by side view</a>, and here is the E-PL7 in all its glory:</p>
<p class="pic"><a href="https://commons.wikimedia.org/wiki/File:Olympus_E-PL7.jpg"><img src="https://blog.timmschoof.com/images/pl7.jpg" /></a><br />Olympus E-PL7 – photo by PetarM, used under <a href="http://creativecommons.org/licenses/by-sa/4.0/">CC-BY-SA</a></p>
<h2 id="choosing-the-e-m10">Choosing the E-M10</h2>
<p>What helped me figure out that I wanted an E-M10 probably was handling my friend’s 40D. Not that I didn’t like it, actually the opposite: I loved it, especially that it felt like an analog tool, almost like a hammer (yeah, with dials, but still).</p>
<p class="pic"><a href="https://www.flickr.com/photos/cubmundo/6376129973"><img src="https://blog.timmschoof.com/images/40D.jpg" /></a><br />Canon 40D – photo by cubmundo, used under <a href="https://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a></p>
<p>I almost settled for a used 60D, which would’ve cost around 400€ and has some nice improvements over the 40D: More resolution without worse low light performance, more focus points, SD instead of CF cards. That sounded nice to me.</p>
<p>Then, another friend told me she wouldn’t recommend anybody without an investment in Canon or Nikon lenses to get in the DSLR market today. With mirrorless, picture quality is well beyond “good enough”, and there’s a big and affordable lens collection. I remembered Shawn Blanc’s <a href="https://shawnblanc.net/2013/04/camera-review-olympus-e-pl5/">E-PL5 review</a> from one of the times when I was wondering if a “proper” camera would be something for me. Reading his <a href="http://toolsandtoys.net/reviews/the-olympus-om-d-e-m10/">review of the E-M10</a> then gave me an idea of what it offered that a slimmer camera like an E-PL could leave me wanting.</p>
<p>Anyway, all the advantages of mirrorless cameras kinda came back to me. If only there was one that also gave me what I loved about the 40D! Well, I had the opportunity to fool around with an E-M10 for a few days. By the way: It is the “smallest” in Olympus’ OM-D line, after the E-M5 and E-M1.</p>
<p class="pic"><a href="https://blog.timmschoof.com/images/EM10-table-kit.jpg"><img src="https://blog.timmschoof.com/images/EM10-table-kit.jpg" /></a><br />Olympus E-M10, with the Olympus 14-42mm kit lens – compact, but I wanted to go with prime lenses instead of zooms in the beginning</p>
<p><strong>Electronic Viewfinder</strong><br />
What had turned me off the whole mirrorless thing before was the concept of an EVF. I had tried some at a big box reseller, and found them absolutely appalling. Probably very bright, neon-lit surroundings and misconfiguration were to blame, because I didn’t hate the E-M10’s when I first tried it in real world conditions. I also wasn’t blown away, but it only got better with time.</p>
<p>The lag is not a problem for me at all. The resolution could be better I guess, but it is alright. And then, there are all the advantages that come with it being digital: Different grids to choose from, exposure correction affecting what you see, and looking at pictures you’ve taken or even the menu in the EVF as well – without taking the camera away from your eye. Very handy sometimes.</p>
<p><strong>Dials</strong><br />
I had fallen in love with the big manual control dial on the 40D’s back. I used it for exposure compensation as well as scrolling through pictures, and knew that I didn’t want to miss this kind of control in my own camera. And there needed to be two of these, another one for adjusting the aperture or shutter speed, depending on the mode I’m shooting in.</p>
<p>All of this, I found in the E-M10 as well. As mentioned, it has two dials. In combination with a function button, these also allow for easy changing of the ISO and white balance without digging into a menu.</p>
<p><strong>AF Point Selection</strong><br />
Another thing that I liked about the 40D was the little knob-thingy above the control dial for selecting the focus point. It only has 9 of them, so with one push in the right direction, the right focus point was selected.</p>
<p>The E-M10 has 81 focus points. Guess what doesn’t work with 81 focus points: Pushing once to select the right one. So you have to use the 4-way dial on the camera’s back, which for me is the weakest spot in day to day use. Pressing the dial ten or twelve times to get to the right point for the specific shot just isn’t fun. But it is possible without taking the camera from your eye, so it’s acceptable for me. The E-M10 II (released in August this year) goes at this with a touchscreen that acts as a trackpad (“<a href="http://www.dpreview.com/reviews/bang-for-the-buck-olympus-om-d-e-m10-ii-review/4">AF targeting pad</a>”) when you hold the camera up. I don’t know how well that works, but the solution probably is somewhere along those lines. (I guess this is where the focus-and-recompose photographers cheer.)</p>
<p class="pic"><a href="https://blog.timmschoof.com/images/EM10-hands-close.jpg"><img src="https://blog.timmschoof.com/images/EM10-hands-close.jpg" /></a><br />Olympus E-M10, with the Olympus 45mm f1.8 lens – photo by <a href="http://silvandaehn.com/">Silvan Dähn</a></p>
<h2 id="handling-the-e-m10">Handling the E-M10</h2>
<p>Because I’m overall endorsing the E-M10 here, I feel that I have to express a warning: The <strong>Olympus menus</strong> are a <strong>mess</strong>. They clearly stand in the way of the camera’s potential. A lot of stuff is possible with custom button and dial configurations and modes and settings etc., but it takes dedication to get there. Just a warning. These help: Here’s a <a href="http://malamut-de.blogspot.com/2013/04/einstellungsfuhrer-zur-om-d-e-m5-und.html">guide in German</a>, here’s a <a href="http://www.adorama.com/alc/0014709/article/olympus-om-d-e-m10-guided-tour">shorter overview</a>, and this <a href="http://www.dpreview.com/articles/9115179666/user-guide-getting-the-most-out-of-the-olympus-e-m5">DPreview thing about the E-M5</a> also mostly applies to the E-M10.</p>
<p>Another gripe: although <strong>Auto ISO</strong> is a nice feature, it is badly implemented: The camera is way too conservative with the ISO and doesn’t let you take full advantage of the excellent IBIS (in-body image stabilization). DPreview has a <a href="http://www.dpreview.com/articles/9115179666/user-guide-getting-the-most-out-of-the-olympus-e-m5/2">workaround</a> that makes it a <em>little</em> better. I still mostly set the ISO manually. This would be the second most annoying thing about the E-M10 for me, after the AF selection. But it’s a pure software thing, which makes it more incomprehensible that Olympus doesn’t just fix it with an update.</p>
<p>Enough with the negative though. There’s talk about the short <strong>battery life</strong> with mirrorless cameras. While you are not gonna get close to a DSLR’s, I found very long shooting sessions with several hundred exposures (ca. 500 on one charge? I don’t remember for sure, but up there somewhere) were no problem at all, meaning it’s not close to annoyingly short for me. With the E-M10, the key is to use the EVF and have the main screen not act as a viewfinder, sucking battery when you don’t even need it. This way, waiting for a shot, you can dial the settings in, and the camera just lights up a display instead of transmitting the sensor image to the screen as well.</p>
<p class="pic"><a href="https://blog.timmschoof.com/images/EM10-street-hold.jpg"><img src="https://blog.timmschoof.com/images/EM10-street-hold.jpg" /></a><br />Olympus E-M10, with the Olympus 45mm f1.8 lens – and me, not knowing how to hold a camera – photo by <a href="http://silvandaehn.com/">Silvan Dähn</a></p>
<h2 id="closing-thoughts">Closing thoughts</h2>
<p>I hope this is a useful overview and example of what could be important to think about in a new camera. I’m only a few months in with my E-M10, but I’m exceedingly happy with it.</p>
<p>When I got the E-M10 Mark I, it was the best deal, no question. Now, the deal has actually gotten a bit worse, and the Mark I and II are closer together in price, within about 150€. If the small improvements sound interesting to you, maybe that’s worth it. Check out <a href="http://www.dpreview.com/reviews/bang-for-the-buck-olympus-om-d-e-m10-ii-review">DPreview’s E-M10 Mark II review</a>. A used E-M5 might also be an option, if you can stomach the lack of Wifi.</p>
<p>If you’ve made it this far, you must be really interested. If you’re already looking for <strong>lenses</strong>, <a href="http://thewirecutter.com/reviews/the-first-micro-four-third-lenses-you-should-buy/">the Wirecutter has a good piece</a>, worth a read.</p>