The Central Indiana Section of the Audio Engineering Society presents “Historical Development of the Unidyne I, II, III”
Have you ever wondered how the Shure “Unidyne” became one of the most recognized and familiar items in the audio and broadcasting business?
Join us as Michael Pettersen and Gino Sigismondi share historical information and photos from the Shure archives to give us an interesting and accurate history of the development and marketing of this classic microphone.
Central Indiana AES Section joins in toasting the New Year!
To help get the year 2021 started in a positive way, we have opted for a virtual “Social Hour.” This will be an opportunity to re-connect with your professional friends in the audio and related fields…
– catch up on what you’ve been doing
– chat about what’s ahead
– share any concerns or positive events during the pandemic
– chat about industry happenings, changes, trends
– find out which company bought out the other
We hope this will be a way to help engage members and professionals in looking forward to a solid recovery in the New Year. If there are enough participants, we will set up breakout rooms to provide a better opportunity to share among a smaller group.
Looking forward to seeing and hearing from you on January 5th! Don’t forget to bring your favorite drink so we can toast to everyone’s health and well-being in the New Year.
This event is free and will be hosted virtually on Zoom. Register below at our Eventbrite page to receive connection info.
This meeting brought together a dynamic group of panelists to discuss different career paths and roles within the audio industry, moderated by Jay Dill and Dr. Tim Hsu. The meeting began with short introductions of each panelist and a description of their roles within the larger audio industry, including job duties and responsibilities.
Alan Alford of the Indianapolis Symphony began by detailing his job as an IATSE stage hand and audio engineer for both indoor concerts at the Hilbert Circle Theatre and outdoors at the symphony’s summer home at Conner Prairie. Alan briefly detailed his non-audio duties, but focused on his work providing live sound reinforcement as well as advancing audio for guest artists and providing support for audio recordings. Elizabeth Alford then described her varying roles working with Jonas Productions in their backline rental, and her focus on wireless technology and RF coordination. Her shift to RF coordination has put her in roles managing everything ranging from small wireless packages to massive shows with 80+ channels of wireless microphones and in-ear monitors. Elizabeth ardently emphasized the increasing need for knowledge of networking and general IT infrastructure in the modern production environment, as well.
Luke Molloy discussed his role as an audio-video system designer and drafting engineer. His work focuses not only on meeting the needs of a given system design and installation, but on configuring systems to fit within the physical and practical confines of the installation environment. Luke pointed out that his background in both audio and video dovetailed nicely with his engineering background to prepare him for this work. Clem Tiggs described his work as an A2 on major film and television productions, characterizing the role as the “get it done” person, responsible for everything from placing mics to ensuring signal flow back to the A1 at the mix location. A strong technical background, troubleshooting ability, and a positive rapport with clients and talent were also highlighted as necessary skills for the A2 role. Clem also discussed the differences between freelance and traditional salaried work, with freedom of choice being a major upside, but with the caveat of requiring discipline and a strong independent work ethic.
The final panelist, Gavin Haverstick, presented his work as an acoustical consultant with a particular reputation for high-quality recording studio design. The marriage of a musical background and an engineering education served to propel Gavin towards a focus on musical designs and applications, where his consultancy focuses on recording studios, performance spaces, and multi-purpose auditoriums.
Following individual introductions, the panel took questions from the attendees addressing a variety of related topics.
Engineer and innovator George Massenburg joined the Central Indiana section for a discussion of immersive audio mixing, highlighting his recent experience with remixing major popular music artists in Dolby Atmos, as well as his experience with current and upcoming consumer delivery methods for immersive content. George began the presentation with a brief history and overview of immersive audio, stretching back to stereo and early surround. The early use of binaural transmission in the 1881 Paris Opera telephone transmission was highlighted as a truly early form of immersive audio, despite being often overlooked. Further developments presented ranged from Alan Blumlein’s stereo innovations through quadraphonic recordings, Todd-AO surround, Dolby Stereo, and DTS Surround, among other formats. DTS Music Disc, DVD-Audio, and SACD were also highlighted as previous music-specific immersive formats. George was careful to highlight not only the success and innovations of many of these technologies and formats, but also to acknowledge some of the commercial shortcomings of earlier forays into immersive music.
Following the historical context of immersive audio, George moved into the realm of Dolby Atmos. Discussion began with the basic components of an Atmos mix: bed tracks and objects. George discussed the use of 7.1.2 (7.1 with two height channels) and 7.1.4 (7.1 with four height channels) channel-based bed tracks as the foundation for a mix, with the remaining 110+ tracks reserved for objects that can be placed and manipulated outside of these defined speaker locations. George carefully defined the sometimes-nebulous term “object” in reference to Atmos, including how objects are encoded during mixing and decoded during playback. George also provided a glimpse into his signal flow and studio setup for immersive mixing, with dedicated playback/mixing and capture/render computers and multiple monitoring formats and devices, including both professional monitor loudspeakers and consumer devices for immersive playback.
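The bed-plus-objects idea can be illustrated with a toy renderer. This is not Dolby’s actual renderer; the speaker layout, function names, and constant-power pan law below are illustrative assumptions. The point is that an object carries a position, and its speaker gains are computed at playback time, while the channel bed is mixed to fixed speaker locations:

```python
import math

# Illustrative horizontal 5-speaker layout: name -> azimuth in degrees.
# (Assumed layout for demonstration; a real Atmos bed also has height channels.)
SPEAKERS = {"L": -30, "C": 0, "R": 30, "Ls": -110, "Rs": 110}

def render_object(azimuth_deg):
    """Render an object's azimuth to per-speaker gains using a
    constant-power pan between the two nearest speakers."""
    # Sort speakers by angular distance from the object position.
    names = sorted(SPEAKERS, key=lambda n: abs(SPEAKERS[n] - azimuth_deg))
    a, b = names[0], names[1]
    span = abs(SPEAKERS[b] - SPEAKERS[a])
    frac = abs(azimuth_deg - SPEAKERS[a]) / span if span else 0.0
    gains = {name: 0.0 for name in SPEAKERS}
    gains[a] = math.cos(frac * math.pi / 2)  # constant-power crossfade
    gains[b] = math.sin(frac * math.pi / 2)
    return gains
```

For example, an object placed exactly at the center speaker gets unity gain there and zero elsewhere; an object halfway between two speakers is split so the summed power across the pair stays constant. A real object renderer does this in three dimensions against the playback room’s actual speaker layout.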
George then asked for questions from viewers online, received via YouTube chat, text, and other channels. These covered an incredible range of immersive audio sub-topics, including the differences between mixing or re-mixing existing material for immersive formats versus capturing immersive content at the recording stage. The difficulty of re-mixing content that already exists in an artist-approved stereo iteration was also discussed. George was careful to note that one of the first steps in an immersive remix is often to recreate the existing stereo mix, then branch out from the sounds, feelings, and intentions existing in that format. Questions regarding consumer delivery were also addressed, covering topics such as single-point immersive systems (e.g. sound bars, wireless home devices), binaural renderings, MPEG-H encoding, and mobile audio.
George both started and ended the evening on an uplifting note, emphasizing the fact that immersive audio opens up a world of opportunities for increased artistry. Our goals as immersive content creators should be to provide a truly special and authentic experience for artists and listeners alike.
Join us for this first in a series of roundtable discussions about different career tracks in the audio industry. This panel will feature five local (Indiana) professionals, representing the following career areas:
Live Sound & Basic Audio Operator Training
TV Sound & Freelance
Join us for this opportunity to hear more and ask questions about careers in these audio specialty fields.
Meeting Topic: Automatic Microphone Mixing: How and Why?
Moderator Name: Jay Dill and Nate Sparks
Speaker Name: Michael Pettersen and Gino Sigismondi, Shure
Other business or activities at the meeting: General welcome, introduction to the section and section’s website/social media, and information on joining the AES for non-members.
Meeting Location: Online (YouTube stream with Q&A)
Moderators Jay Dill and Nate Sparks joined Shure’s Michael Pettersen and Gino Sigismondi for the Central Indiana Section’s inaugural webcast to discuss the history and current state of automatic microphone mixing. The presentation began with an in-depth overview of the history of automatic mixing, dating back to the original concept brought forward by famed theatre sound designer Dan Dugan. Dugan’s initial concept allowed a theatre mixer to offload the task of muting and unmuting (or fader riding) multiple microphones as actors delivered lines and entered or left the stage. This functionality helped optimize gain before feedback, prevented comb filtering, and reduced buildup of background noise and reverberation.
Shure entered the automixing market in the early 1970s with the Voicegate, a speech-centric gating system. By the mid-70s, advancements allowed for variable threshold operation, as well as gain sharing, a system that maintains a constant sum total gain across all open channels as channels are added or subtracted, thereby creating a more stable system. Further advances heralded a dual-element microphone with a secondary, rear-facing capsule providing a differential to ensure only on-axis input signals triggered unmuting, and system linking to allow for more channels.
The next wave of development included adaptation to ambient noise and the ability to work with non-proprietary microphones. This system grew into the famed FP-410, which included MaxBus, a system to ensure that the loudest receiver capturing a single source would remain open, a system to ensure that the last microphone used would remain open, and the implementation of “off-attenuation”, which used approximately 15 dB of gain reduction rather than full muting of sources. These technologies have rolled into the systems we know as IntelliMix.
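The gain-sharing behavior described above can be sketched in a few lines. This is a conceptual illustration, not Shure’s or Dugan’s actual implementation (the function name and parameters are assumptions): each channel receives a gain equal to its share of the total signal energy, so the summed gain across all open channels stays constant as channels come and go, and an “off-attenuation” floor (set to -15 dB here, matching the figure cited for the FP-410) keeps quiet channels attenuated rather than fully muted.

```python
import math

def gain_share(levels_db, off_atten_db=-15.0):
    """Dugan-style gain sharing: each channel's gain (in dB) is its
    share of the total signal power, so the summed gain of all open
    channels remains constant as channels are added or removed.
    Quiet channels are floored at off_atten_db instead of fully muted."""
    # Convert channel levels from dB to linear power.
    powers = [10 ** (db / 10) for db in levels_db]
    total = sum(powers)
    gains = []
    for p in powers:
        g_db = 10 * math.log10(p / total)      # channel's share of the total
        gains.append(max(g_db, off_atten_db))  # "off-attenuation" floor
    return gains
```

With two equally loud channels, each sits at roughly -3 dB; a single dominant talker rises toward unity gain while the rest fall to the -15 dB floor, which is exactly the stability behavior the presentation described.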
As the world of audio migrated away from analog processing, IntelliMix went digital. While the aims of automixing remain the same, the processing tasks of signal detection, channel priority, gain sharing, etc. have been merged into DSP-based systems in both hardware and software. Current automixing offerings retain this functionality, but also allow for configuration of all aspects of the system via a browser-based GUI. Traditional functionality can also be coupled with additional audio enhancement processing and digital I/O for maximum flexibility.
The presentation was facilitated by Force Technology Solutions’ live streaming studio, allowing broadcast-style graphics and switching, off-site production, and remote presentation from across the Midwest. The lecture can be viewed on the Central Indiana Section’s YouTube channel or directly at https://youtu.be/diWqymbEuhw.
Join us when guest presenters Gino Sigismondi and Michael Pettersen will be reviewing the various challenges automatic mic mixing was developed to address, along with the evolution of Shure products – from early designs to today’s advanced products.
They will note various applications for installed systems, such as: education, board rooms, courtrooms, legislative halls, convention facilities, houses of worship, and broadcast studios.
Gino Sigismondi is Associate Director of Technical Support & Training at Shure, Inc. Michael Pettersen is Director of Corporate History at Shure, Inc.
You must register via this Eventbrite Link (free) in order to receive the link to join the webinar:
Amidst the turmoil of the current global health situation, the Audio Engineering Society’s 148th International Convention is going online! Join us for the Virtual Vienna Convention, with online sessions from presenters including Thomas Lund (Genelec), Eddy Brixen (DPA), Nadja Wallaszkovits (Austrian Academy of Sciences), Daniel Belcher (d&b audiotechnik), Bob McCarthy (Meyer Sound), Alex Case (UMass Lowell; Focal Press), and many more!
Some live presentations will air in the morning through early afternoon in the Eastern timezone, while others are available on-demand. Registration has been drastically discounted.
What: Sound Reinforcement for Acoustic Jazz with Dr. Ian Corbett
When: January 25, 2020 – 2:00pm – 3:30pm
Where: University of Indianapolis, Christel DeHaan Fine Arts Center (1400 E Hanna Ave, Indianapolis, IN 46227)
RSVP: Use our Eventbrite page or our Contact form
While all sound reinforcement scenarios share certain aspects, each genre of music can present unique challenges. Acoustic jazz can couple a large variety of instrumentation, a wide dynamic range, and a demand for high fidelity and clear sound. How does the modern engineer deal differently with reinforcement of this demanding musical genre? Dr. Ian Corbett, renowned live sound and location recording engineer, and author of Mic It!, presents his approach to operating in the world of acoustic jazz to create the ideal experience for both the audience and musicians on stage. Join us for discussion, live demonstrations, and Q&A!