The Tech-Savvy Interpreter 2.0: Interview with Barry Olsen on the history of interpreting technology
Announcing the Tech-Savvy Interpreter 2.0! 🎉
For the last five years, our good friend and colleague Barry Olsen has written the Tech-Savvy Interpreter column in the Tool Box Journal - the premier technology newsletter for translators.
As you may know, Barry recently took a position with KUDO and has stepped down from writing his monthly column on #terptech.
Your two favorite tech geeks thought it would be a shame if the column disappeared. So we threw our hats into the ring, and here we are, bringing you the first edition of The Tech-Savvy Interpreter 2.0.
Every month, you’ll hear from the two of us about interpreting technology.
To start off, we sat down for a virtual chat with Barry to reflect on how interpreting technology has developed in recent years. This month's column includes the first half of that interview. Or, check out the video of the entire interview below!
On the history of interpreting technology - An interview with Barry Olsen
Barry, thank you for joining us for this passing of the baton.
My pleasure. I’m thrilled that the Tech-Savvy Interpreter is going to continue. There’s so much happening in this space. Knowing that you two are here brings a sense of relief and excitement.
Tell us about the beginnings of the Tech-Savvy Interpreter.
The Tool Box Journal has been around for a long time. Jost Zetzsche has always focused on translation technology: from machine translation to translation memories to terminology extraction, among many other things. Around 2016, interpreting technology became more prominent. Seeing my interest in technology, Jost invited me to write a monthly column on interpreting technology for the Tool Box Journal. I didn’t think twice about it. It was thrilling to share what I had been learning and exploring: basic technologies that underlie interpreting, remote interpreting, glossary management and more. And it's been a wonderful experience.
How does it feel to write a column about interpreting in a newsletter about translation technologies?
I think it makes complete sense to have an interpreting technology column. There's a bit of overlap. It's also interesting to bring the two “subcommunities” together to have a little more exchange and cross-pollination.
I’ve always said that translators were ahead of interpreters when it came to adoption of technology. Would you confirm that impression?
Very much so. The curve of technology adoption for translators is much farther along than it is for interpreters. Our current situation with Covid-19 is accelerating adoption of remote interpreting, because we really don’t have much of a choice at the moment.
What did things look like when you first started writing about interpreting technology for the Tool Box Journal?
I remember sharing my interest in technology with an interpreter that I very much respect. He looked at me and said:
That sentiment is obviously shifting as we have been forced to move online to work during this extraordinary time. The changing technology landscape is probably the biggest shift that I've seen. Early remote simultaneous interpretation platforms had a little bit of funding, but were limited in what they could do and had to try to find clients to become successful companies.
Are interpreters open to new technologies?
As a group of professionals, we consider ourselves special. And I don't think that's a bad thing; we do something that is unique. Sometimes we don't think we are part of larger trends. As I looked at adoption of new technologies (or lack thereof), it became apparent that this is all part of a much larger process. If we can understand that, we can better understand how we are likely to be affected. This motivated some of my initial columns.
Tell us about the technological developments you saw while writing the column. What developments were exciting or crucial for the field of interpreting?
The groundwork for the big developments that are affecting interpreting now was laid almost a decade ago. Early telepresence rooms were extremely hardware heavy. All of the codecs were still proprietary. Systems couldn’t talk to one another. Companies wanted to lock everything down and control it. The bandwidth requirements were astronomical. The installations were expensive and took up a lot of space. But they were able to produce excellent video and audio. Man, were they cool.
I remember going to a telepresence studio. Whenever I would do chuchotage, the camera would turn on and show me whispering into the ear of the delegate. These systems weren’t designed for multilingual communication. It wasn’t even on their radar screen.
The next hurdle was standards-based conferencing: at the time, Lifesize couldn't connect with Cisco, which couldn't connect with Polycom, and so on. There were still some codec wars going on, and the big players were jostling for position.
Another development that really opened up remote simultaneous interpretation was WebRTC (Web Real-Time Communication). This became one of the standards for browser-based communications. Within a WebRTC-compliant browser, you can exchange video, audio and data. When that happened, everything went virtual.
I also remember the early days of the cloud. I was in Silicon Valley, and they showed us some of the first cloud implementations. Suddenly, if you needed to scale up because you needed more storage or computing power, in a few clicks you could have a server with ten times the capacity. This laid the foundation for what we can do with WebRTC today.
Why was WebRTC such a big deal?
WebRTC lowered the barrier to entry. Scrappy startups with an idea could begin to build and experiment and create. When WebRTC came about, everything went virtual. It’s all zeros and ones. It’s all cloud-based. And you began to see people innovating in ways you hadn’t seen before.
I remember going to a conference in Santa Clara and seeing a group of guys who wanted to show how they had achieved speech-to-speech translation for the first time. And I initially thought it was terrible. I was looking at the output - wrong prepositions, misunderstood polysemy - and thinking it was an utter failure. But the people in the room were ecstatic. They were totally focused on the technological side. The language side was just the throughput.
And I realized that our communities were talking past each other. What we need are communities that understand each other so we can actually find tools that'll be useful for us. So we can understand what tech can do, what humans can do and what humans and tech can do together.
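For fellow tech geeks who want to peek under the hood: here is a minimal, purely illustrative sketch (our addition, not Barry's) of what “exchanging video, audio and data” in a WebRTC-compliant browser looks like, using the standard browser APIs. The signaling callback and the “glossary” data channel are hypothetical placeholders; real remote interpreting platforms layer consoles, relay channels and much more on top of these basics.

```typescript
// Illustrative only: the standard browser WebRTC APIs for sharing
// audio, video and arbitrary data between peers.
// Signaling (exchanging session descriptions and ICE candidates)
// is assumed to happen elsewhere, e.g. over a WebSocket.
async function startCall(signaling: (msg: string) => void): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // a public STUN server
  });

  // Capture the local microphone and camera, then add the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Arbitrary application data (say, a shared glossary) can travel over a data channel.
  const channel = pc.createDataChannel("glossary"); // hypothetical channel name
  channel.onopen = () => channel.send("ready");

  // Create an offer and hand it to whatever signaling layer the app uses.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling(JSON.stringify(pc.localDescription));

  return pc;
}
```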
Stay tuned!
In the next newsletter, we’ll hear from Barry about the future of interpreting technologies.
P.S. Questions or ideas about interpreting technology? Drop us a line at info@techforword.com! We do the research, so you don’t have to.