Meeting Report: Strategies for Personal Monitoring

Meeting Topic: Strategies for Personal Monitoring

Moderator Name: Jay Dill

Speaker Name: Gino Sigismondi

Meeting Location: Online

The Central Indiana Section hosted Shure’s Gino Sigismondi for a presentation on personal or “in-ear” monitoring (IEM) systems. The presentation began with an overview of IEM systems and a history of the technology. As early as 1982, Marty Garcia built custom-fit earphones for stage use, and the first wireless IEM system was employed in the late 1980s using a simple FM transmitter. By the late 90s, custom-built hardware gave way to commercial wireless IEM systems and “universal fit” earphones, greatly increasing IEM adoption. The 2010s saw further advancement in diversity RF receivers, increased affordability, and personal mixing options.

Early in Gino’s overview of IEM system architecture, the topic of earphones was broached. Despite seemingly endless earphone options, isolation was presented as the most significant consideration, as it provides the ability to hear a mix while maintaining a reasonable listening level. This point was reinforced later in the presentation when discussing the dangers of IEM listening at extreme levels, including the issue of users removing one earphone. Instead, Gino recommended the use of ambient mics or ambient headphone systems to provide audience feedback to IEM wearers.

From this broad system overview, Gino presented options for IEM system configuration, including receivers sharing a mix via a single transmitter, dual monophonic mixes from a single transmitter, and traditional stereo mixes. While stereo mixes provide a more realistic listener experience, both stereo and mono setups require tradeoffs. Gino then presented a hybrid option in which each user receives two separate mono signals that can be balanced at the receiver, giving the user some local control of the mix. The topic of distributed mixing was also introduced, along with the potential pros and cons of such a system. Case studies of scenarios for each of these options were presented, as well as example systems.
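
The dual-mono balance option lends itself to a few lines of illustration: the receiver carries two mono feeds (say, a shared band mix and the performer's own signal), and the wearer blends them locally. This is a sketch of the concept only; the function and signal names are invented for the example and do not reflect any manufacturer's implementation.

```python
# Illustrative sketch of receiver-side balancing of two mono IEM feeds.
# Signals are plain sample lists; names and the linear balance law are
# invented for this example, not taken from any vendor's hardware.

def receiver_blend(mix_a, mix_b, balance=0.5):
    """balance = 0.0 -> all mix A; 1.0 -> all mix B; 0.5 -> equal blend."""
    return [(1.0 - balance) * a + balance * b for a, b in zip(mix_a, mix_b)]

band = [0.2, 0.4, -0.1]  # shared band mix samples
me = [0.5, 0.0, 0.3]     # performer's own signal
print(receiver_blend(band, me, balance=0.7))  # wearer favors their own feed
```

The point of the design is that the transmitter sends the same two signals to everyone, while each receiver applies its own `balance` setting.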

The presentation then shifted to the topic of RF management for IEMs. Gino advocated for the use of inclusion groups, where wireless devices are segregated by type, with each type using a different segment of the available frequencies. Similarly, wireless mic and IEM units, and their antennas, should be physically isolated to reduce RF interference. Proper antenna selection can also increase system effectiveness, with directional antennas being a significant way to reduce multipath dropouts. Likewise, antenna combiners can help to reduce intermodulation issues within larger systems.
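
The intermodulation concern can be made concrete: two transmitters at f1 and f2 generate third-order products at 2f1−f2 and 2f2−f1, and a coordinator must keep operating frequencies away from those products. Below is a minimal sketch of that check, with arbitrary example frequencies and guard band; it is not any vendor's coordination algorithm.

```python
# Minimal sketch of a third-order intermodulation (IM3) compatibility check,
# the kind of calculation frequency-coordination software performs.
# Frequencies and the 0.1 MHz guard band are arbitrary examples.

def im3_products(f1, f2):
    """Third-order intermodulation products (in MHz) of two carriers."""
    return {2 * f1 - f2, 2 * f2 - f1}

def is_compatible(candidate, in_use, guard=0.1):
    """Reject a candidate that falls within `guard` MHz of any IM3 product
    generated by a pair of frequencies already in use."""
    for i, f1 in enumerate(in_use):
        for f2 in in_use[i + 1:]:
            if any(abs(candidate - p) < guard for p in im3_products(f1, f2)):
                return False
    return True

in_use = [510.0, 512.5]  # MHz, already-coordinated transmitters
print(is_compatible(515.0, in_use))  # 2*512.5 - 510 = 515.0 -> prints False
print(is_compatible(516.2, in_use))  # clear of both IM3 products -> prints True
```

A full coordination also considers higher-order products and receiver images, but the pairwise IM3 check above is the core of why combiners and careful frequency selection matter as systems grow.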

The presentation concluded with an audience Q&A.

Written by: Brett Leonard

Meeting Report: Exploring a Virtual Intercom System

Video from our last event is now posted on our YouTube page.

Meeting Topic: Exploring a Virtual Intercom System

Moderator Name: Jay Dill

Speaker Name: Hal Buttermore, Telos Alliance

Meeting Location: virtual (Zoom webinar)

Summary

The Central Indiana Section was joined by the Indianapolis section of the Society of Broadcast Engineers for a discussion of network-based intercom systems with Telos Alliance’s Hal Buttermore. The program began with a discussion of the paradigm shift involved in moving away from an intercom system based on dedicated hardware to a solution based on off-the-shelf computer and network hardware. Such a system feels similar to traditional intercom systems, offering a variety of belt pack, desktop, and racked communication stations with typical party-line and talk-group configurations, while taking advantage of current network technology, including AES67 integration. The result is a system that is lower in cost, easily scalable, and more configurable by the end user.

An additional benefit of this system architecture is the ability to use existing VoIP technology for relatively simple interconnection between remote facilities. Telos employs the open standard OPUS codec for network communication between facilities. This wide area network can link remote production crews, separate affiliate studios, or even remote personnel to a central facility utilizing the same group and party lines. Included in this functionality is a “lite” mode for use with even typical residential wireless Internet and cellular modems. This capability was particularly useful through the pandemic, allowing production personnel working from home to share a reliable, dedicated intercom system.

Hal also highlighted the user configurability of such a system. In Telos’s Infinity intercom system, a simple drag-and-drop interface allows users to create party lines, talk groups, and direct connections within the system. The same interface allows receivers to be dropped into groups or party lines, or to create interruptible foldback (IFB) channels with near-instant routing and availability. Finally, these routed groups and channels can be dragged onto talk buttons to configure belt packs, panels, or consoles as each user requires.

Finally, information was presented about Telos’s VIP system, which removes the requirement for dedicated endpoint hardware entirely, replacing it with a single intercom server. This Internet-connected server can then accommodate up to 16 virtual control panel/belt pack systems run through typical browsers, allowing operating system-agnostic usability with a simple link-and-password access system for users. This, combined with optional cloud-hosted servers and licensing based on virtual intercom quantity, allows the system to scale as needed while maintaining lower overall cost.

The program concluded with Q&A, moderated by section chair Jay Dill. The complete webinar is available for viewing on the CIS YouTube page: https://youtu.be/TwT2dTBLDh8

Written By: Brett Leonard

Meeting Report: Historical Development of the Unidyne

Meeting Topic: Historical Development of the Unidyne I, II, III

Moderator Name: Jay Dill & Nate Sparks

Speaker Name: Gino Sigismondi & Michael Pettersen, Shure Inc.

Meeting Location: virtual (YouTube: https://youtu.be/bvg6FYMRuAs)

Date: January 21, 2021

Summary

Shure’s Michael Pettersen and Gino Sigismondi joined the Central Indiana Section to dive into the storied history of the Unidyne dynamic microphone motor/capsule and the subsequent evolutions that have shaped our industry.
Michael began the program by taking us back to the original electrical equivalent diagrams written in Benjamin Bauer’s notebook in 1937, describing what would become the “Uniphase Network” used to create a single-capsule, directional dynamic microphone. The design was complemented by the work of designer Wesley Sharer, along with a little inspiration from the grill of the ’37 Oldsmobile Coupe Six, and was released as the Unidyne Model 55 in 1939.
The Unidyne II was first released within the Model 55S in 1951. The Unidyne II features the same performance with a size some 30% smaller than the original Unidyne capsule, thus the “S” nomenclature for small. The new design was oriented towards the television medium, which considered the original Model 55 to be somewhat obtrusive.
The Unidyne III Model 545 was released in 1959, touted as the “smallest cardioid dynamic microphone” ever. The Model 545 was end-addressed, and therefore had a more consistent polar pattern than similar and competing models of the era. This led to popularity with the burgeoning sound reinforcement industry, as the pattern’s consistency allowed for more gain before feedback. The Beatles were marquee users of the Model 545 with the A25B swivel mount.
As the Model 545 gained popularity on stage, Bob Carr worked on a line of Unidyne III-based microphones to appeal specifically to studios. This line, dubbed “Studio Microphones” (SM), consisted of microphones using the same capsules as existing mics, but with less reflective finishes, no switches, and XLR connectors. The venerable Unidyne III SM56 was released in 1964, with the SM58 released just two years later. While not instant sales successes, the use of the SM56 at the Monterey Pop Festival by McCune Sound in 1967 raised their profile in the live sound arena. The push as a live sound microphone line occurred more in the 1970s with their introduction to the performers and sound companies of Las Vegas, with artists such as Frank Sinatra becoming devoted users. Also of note was the introduction of the now-famed SM7 in 1972.
Following the history of the Unidyne series, Gino and Michael took audience questions, as well as providing a little “audio mythbusters” surrounding the Unidyne family.

Written By: Brett Leonard

Meeting Report: Careers in Audio Panel

Meeting Topic: Careers in Audio: What’s your Connection?

Moderator Name: Jay Dill & Tim Hsu

Speaker Name: Alan Alford (Indianapolis Symphony Orchestra), Elizabeth Alford (Jonas Productions and freelance), Luke Molloy (Diversified), Clem Tiggs (freelance), Gavin Haverstick (Haverstick Designs)

Meeting Location: virtual (YouTube: https://youtu.be/TdUb9vlL8i4)

Summary:

This meeting put together a dynamic group of panelists to discuss different career paths and roles within the audio industry, moderated by Jay Dill and Dr. Tim Hsu. The meeting began with short introductions of each panelist and a description of their roles within the larger audio industry, including job duties and responsibilities.


Alan Alford of the Indianapolis Symphony began by detailing his job as an IATSE stage hand and audio engineer for both indoor concerts at the Hilbert Circle Theatre and outdoors at the symphony’s summer home at Conner Prairie. Alan briefly detailed his non-audio duties, but focused on his work providing live sound reinforcement as well as advancing audio for guest artists and providing support for audio recordings.
Elizabeth Alford then described her varying roles working with Jonas Productions in their backline rental operation, and her focus on wireless technology and RF coordination. Her shift to RF coordination has put her in roles managing everything from small wireless packages to massive shows with 80+ channels of wireless microphones and in-ear monitors. Elizabeth ardently emphasized the increasing need for knowledge of networking and general IT infrastructure in the modern production environment as well.


Luke Molloy discussed his role as an audio-video system designer and drafting engineer. His work focuses not only on meeting the needs of a given system design and installation, but on configuring systems to fit within the physical and practical confines of the installation environment. Luke pointed out that his background in both audio and video dovetailed nicely with his engineering background to prepare him for this work.
Clem Tiggs described his work as an A2 on major film and television productions, describing the role as the “get it done” person, responsible for everything from placing mics to ensuring signal flow back to the A1 at the mix location. A strong technical background, troubleshooting ability, and a positive rapport with clients and talent were also featured as necessary skills for the A2 role. Clem also highlighted the differences between freelance and traditional salaried jobs, with freedom of choice being a major upside, but with the caveat of requiring discipline and a strong independent work ethic.


The final panelist, Gavin Haverstick, presented his work as an acoustical consultant with a particular reputation for high-quality recording studio design. The marriage of a musical background and an engineering education served to propel Gavin toward a focus on musical designs and applications; his consultancy focuses on recording studios, performance spaces, and multi-purpose auditoriums.


Following individual introductions, the panel took questions from the attendees addressing a variety of related topics.

Written By: Brett Leonard

Meeting Report: Introduction to Immersive Mixing: Atmos & Beyond

Meeting Topic: Introduction to Immersive Mixing: Atmos & Beyond

Moderator Name: Brett Leonard

Speaker Name: George Massenburg, McGill University and Massenburg Design Works

Meeting Location: virtual (YouTube: https://youtu.be/0FEV6llLMDU)

Summary:

Engineer and innovator George Massenburg joined the Central Indiana section for a discussion of immersive audio mixing, highlighting his recent experience with remixing major popular music artists in Dolby Atmos, as well as his experience with current and upcoming consumer delivery methods for immersive content.
George began the presentation with a brief history and overview of immersive audio, stretching back to stereo and early surround. The early use of binaural transmission in the 1881 Paris Opera telephone transmission was highlighted as a truly early, though often overlooked, form of immersive audio. Further developments presented ranged from Alan Blumlein’s stereo innovations to quadraphonic recordings, Todd-AO surround, Dolby Stereo, DTS Surround, and other such formats. DTS Music Disc, DVD-Audio, and SACD were also highlighted as previous music-specific immersive formats. George was careful to highlight not only the successes and innovations of many of these technologies and formats, but also to acknowledge some of the commercial shortcomings of earlier forays into immersive music.


Following the historical context of immersive audio, George moved into the realm of Dolby Atmos. Discussion began with the basic components of an Atmos mix: bed tracks and objects. George discussed the use of 7.1.2 (7.1 with two height channels) and 7.1.4 (7.1 with four height channels) channel-based bed tracks as the foundation for a mix, with the remaining 110+ channels reserved for objects that can be placed and manipulated outside of these defined speaker locations. George carefully defined the sometimes-nebulous term “object” in reference to Atmos, including how objects are encoded during mixing and decoded during playback. George also provided a glimpse into his signal flow and studio setup for immersive mixing, with dedicated playback/mixing and capture/render computers and multiple monitoring formats and devices, including both professional monitor loudspeakers and consumer devices for immersive playback.


George then asked for questions from viewers online, received via YouTube chat, text, etc. This garnered an incredible range of immersive audio sub-topics, including the differences between mixing or re-mixing for immersive rather than capturing immersive content from the recording stage. The difficulties in re-mixing content that exists in an artist-approved stereo iteration were also discussed. George was careful to note that one of the first steps to an immersive remix is often to recreate the existing stereo mix, then branch out from the sounds, feelings, and intentions existing in that format. Questions regarding consumer delivery were also addressed, with topics such as single-point immersive systems (e.g. sound bars, wireless home devices, etc.), binaural renderings, MPEG-H encoding, and mobile audio all discussed.


George both started and ended the evening on an uplifting note, emphasizing the fact that immersive audio opens up a world of opportunities for increased artistry. Our goals as immersive content creators should be to provide a truly special and authentic experience for artists and listeners alike.

Written By: Brett Leonard

Meeting Report: Automatic Mic Mixing

Meeting Topic: Automatic Microphone Mixing: How and Why?

Moderator Name: Jay Dill and Nate Sparks

Speaker Name: Michael Pettersen and Gino Sigismondi, Shure

Other business or activities at the meeting: General welcome, introduction to the section and section’s website/social media, and information on joining the AES for non-members.

Meeting Location: Online (YouTube stream with Q&A)

Summary:

Moderators Jay Dill and Nate Sparks joined Shure’s Michael Pettersen and Gino Sigismondi for the Central Indiana Section’s inaugural webcast to discuss the history and current state of automatic microphone mixing. The presentation began with an in-depth overview of the history of automatic mixing, dating back to the original concept brought forward by famed theatre sound designer Dan Dugan. Dugan’s initial concept allowed a theatre mixer to offload the task of muting and unmuting (or fader riding) multiple microphones as actors delivered lines and entered or left the stage. This functionality helped optimize gain before feedback, prevented comb filtering, and reduced the buildup of background noise and reverberation.

Shure entered the automixing market in the early 1970s with the Voicegate, a speech-centric gating system. By the mid-70s, advancements allowed for variable threshold operation, as well as implementing gain sharing, a system which maintains a sum total gain for all open channels as channels are added or subtracted, thereby creating a more stable system. Further advances heralded a dual-element microphone with a secondary, rear-facing capsule providing a differential to ensure only on-axis input signals triggered unmuting, and system linking to allow for more channels.
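
The gain-sharing idea described above can be sketched in a few lines: each channel's gain is its share of the summed input level, so the gains always add up to unity and the total system gain stays constant as talkers come and go. This is a toy illustration of the principle only; real automixers add time constants, thresholds, and priority logic, and the function below is not Shure's or Dugan's actual algorithm.

```python
# Toy illustration of gain sharing: per-channel gain is proportional to
# that channel's input level, so summed gain stays constant regardless of
# how many microphones are active. Not any vendor's actual implementation.

def share_gains(levels, floor=1e-9):
    """Return a linear gain per channel; gains always sum to 1."""
    total = sum(levels) or floor  # avoid dividing by zero when all quiet
    return [lvl / total for lvl in levels]

print(share_gains([1.0, 0.05, 0.05]))  # one talker dominates the gain
print(share_gains([1.0, 1.0, 0.0]))    # two equal talkers split the gain
```

This is why gain sharing is more stable than simple gating: opening a second microphone redistributes gain rather than doubling the pickup of room noise and reverberation.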

The next wave of development included adaptation to ambient noise and the ability to work with non-proprietary microphones. This system grew into the famed FP-410, which included MaxBus, ensuring that only the microphone receiving a single source most strongly would remain open; last-mic lock-on, keeping the most recently used microphone open; and “off-attenuation,” which applied approximately 15 dB of gain reduction rather than fully muting sources. These technologies have rolled into the systems we now know as IntelliMix.

As the world of audio migrated away from analog processing, IntelliMix went digital. While the aims of automixing remain the same, the processing tasks of signal detection, channel priority, gain sharing, etc. have been merged into DSP-based systems in both hardware and software. Current automixing offerings retain this functionality, but also allow for configuration of all aspects of the system via a browser-based GUI. Traditional functionality can also be coupled with additional audio enhancement processing and digital I/O for maximum flexibility.

The presentation was facilitated by Force Technology Solutions’ live streaming studio, allowing broadcast-style graphics and switching, off-site production, and remote presentation from across the Midwest. The lecture can be viewed on the Central Indiana Section’s YouTube channel or directly at https://youtu.be/diWqymbEuhw.

Written By: Brett Leonard

Meeting Report: ReverBall and Music Facility Tour at IUPUI

Central Indiana Section Meeting Report
11/7/2019


ReverBall! – A Tour of the Music Technology Facilities and Open House at the Eskenazi Fine Arts Center at IUPUI (Indiana University-Purdue University, Indianapolis)

This meeting featured a tour of the classroom, recording, and lab facilities of the Music Technology Program on the IUPUI campus. It also included an open house-type event (ReverBall), hosted by the Herron School of Art + Design and the Department of Music Technology.

Dr. Hsu took the group through various spaces used by the Music Technology Program, including:

  • A control room and adjacent tracking studio.
  • A music rehearsal room.
  • An acoustics laboratory where experimental work was being done with impedance tubes and various acoustic panels of different sustainable materials.
  • The Tavel Center for Arts Technology, where interactive/distance learning with local and remote students takes place alongside current research in music technology.
  • A newly renovated piano lab used for keyboard and MIDI controller classes.

The Music Technology program at IUPUI resides in the Purdue School of Engineering and Technology. It offers a Bachelor of Science, Master of Science, and Ph.D. in Music Technology, as well as Bachelor of Science and Master of Science degrees in Music Therapy. Research in the department spans audio, live performance technologies, acoustics, health, music therapy, and digital and acoustic instrument development.

The Open House event included several ensembles performing music, in some cases with homemade instruments or modified regular instruments and synthesizers. Mixed media performances included world premieres of works by both faculty and students.

The meeting was hosted by Dr. Timothy Hsu, faculty member in the Music Technology Program at IUPUI.

Meeting Report: An Evening with John Cooper

Central Indiana – July 18, 2019

Meeting Topic: An Evening with John Cooper

Moderator Name: Michael Petrucci

Speaker Name: John Cooper, freelance FOH mixer for Bruce Springsteen and other noted artists

Other business or activities at the meeting: It was announced that Section elections will commence at this time. Nominations are open and should be submitted to the Secretary. Voting will be done electronically (via special website). Results are expected on/about August 20, 2019.

Meeting Location: ESCO Communications, Indianapolis, IN

Summary

This evening’s guest presenter has been an FOH engineer/mixer for Bruce Springsteen since 2001. He has also worked as FOH engineer/mixer with numerous other artists, including John Mayer, Sheryl Crow, Keith Urban, and Lionel Richie.

John talked about his approach and experience in mixing for major, live music performances. Some of the things he highlighted included: 
• Understanding and maintaining the proper gain structure. 
• A result that sounds good/acceptable, not something that reads ideally on a meter. 
• Having appropriate backup equipment and a strategy to deploy, when necessary. 
• Be cautious of level limits with digital consoles. Some people are using analog matrices to do certain mixes in order to work around these issues. 
• Use of delays to achieve some stereo effects from a mono source. 
• Pro Tools can provide a virtual sound check. 
• The teleprompter is a key element in this scale of road show — everyone uses it to know where they are in the show. There could be as many as 20 displays. Related to this, many shows are automated. 
• The entire stage is on UPS. 
• There is a definite difference in energy level between an afternoon rehearsal and an evening performance. 
• Bass guitar balance (with the rest of the band/orchestra) is a very important consideration. 
• Front fills are important, especially for the performer to be understood. Balance can be tricky and important. 
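
The mono-to-stereo delay trick mentioned above can be sketched as follows: feed the mono source to both channels, with one channel delayed by a few milliseconds to create a sense of width. The sketch below is purely illustrative; the sample rate and delay time are arbitrary examples, not values John cited.

```python
# Illustrative sketch of deriving a stereo effect from a mono source by
# delaying one channel a few milliseconds. Values are arbitrary examples.

SAMPLE_RATE = 48_000  # Hz

def widen(mono, delay_ms=12.0):
    """Return (left, right): right is a delayed copy of the mono input,
    with both channels zero-padded to equal length."""
    d = int(SAMPLE_RATE * delay_ms / 1000.0)
    left = list(mono) + [0.0] * d
    right = [0.0] * d + list(mono)
    return left, right

left, right = widen([0.5, 0.25, -0.25], delay_ms=1.0)  # 1 ms = 48 samples
print(len(left), len(right))
```

In practice the usable delay range is small: too short and the channels simply sum, too long and the delay becomes an audible echo rather than width.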
In regular business, biennial elections for the Section were announced and the process will commence promptly. The value of AES membership was highlighted, including product discounts (Apple, Dell, Sound Particles and Focal Press) plus career resources (profiles, forums, and job board postings from sustaining member companies).

Written By: Barrie Zimmerman, Secretary

Meeting Report – February 2013 – Central Indiana Audio Student Workshop

Keynote speaker Konrad Strauss addressing a great early-morning crowd.

On February 16, 2013, the Central Indiana Section of the Audio Engineering Society hosted the Second Annual Central Indiana Audio Student Workshop. The event was hosted by Section Chair Fallon Stillman and coordinated by Workshop Advisor Kyle P. Snyder, with great assistance from the Executive Board of the Central Indiana Section as well as the faculty and staff of the Indiana University Department of Recording Arts. The Central Indiana Audio Student Workshop 2013 was held on the campus of Indiana University Bloomington, in the Department of Recording Arts studios and related facilities.

Like other regional events, the Central Indiana Audio Student Workshop was modeled as a mini-convention. Our goal was to provide an intimate learning environment, open to anyone interested in audio, including local professionals, university students, and high school students. The Workshop provided attendees the opportunity to improve their skills with some of the best in the business, who presented on topics in recording, mixing, live sound, and acoustics.

Marc DeGeorge of Solid State Logic discussing digital technology.

We also wanted to provide the Workshop free of charge, giving students of all means equal access to the audio instruction we were providing. Not only did we want to provide high-quality instruction for free, we also wanted to incentivize attendance with useful giveaways from sponsors. Finally, we wanted to ensure an acceptable student-to-teacher ratio, so that students felt less like part of a crowd and more like they were in a small classroom where they could ask questions.

Our pre-registration topped out at over 250, and we saw physical attendance of over 200, including numerous walk-ins, reaching audio students and professionals from every corner of the state and many from neighboring states, who were appreciative beyond words. We couldn’t have been more pleased with how the event turned out.

For additional information on the event including sponsors, posters, artwork, schedules, and much more please visit the official event site.

Additionally, the official event report is available for download (pdf).

Press: