By Lieutenant Tai Seki
In the 1960s “Star Trek” episodes, Captain Kirk used a handheld universal translator to understand and respond to the many languages spoken across the universe. The device translated quickly enough for natural conversation between speakers of different languages.
While Captain Kirk’s translator is not yet a reality, the technology to support such a device is progressing at astounding speed. Police officers face situations daily in which language translation is not only needed but needed quickly. How can emerging language-translation technology be incorporated into police departments in the future?
In 2008, the Los Angeles Police Department purchased several Phraselators, handheld language translators developed for the U.S. armed forces. Created in 2001, the device contains preprogrammed questions and answers that can be broadcast in 40 languages. LAPD officials said the Phraselators would assist with everyday translation during criminal investigations and crowd-control situations. But the devices could not perform voice-to-voice translation; only the prerecorded phrases could be broadcast. Officers found themselves spending more time staring at the Phraselators and trying to make them work than having meaningful conversations.
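The Phraselator’s one-way design amounts to a fixed lookup table: each preprogrammed phrase is keyed to prerecorded translations, and nothing outside the table can be broadcast. A minimal sketch in Python illustrates the idea (the phrases and language codes here are hypothetical examples, not the device’s actual inventory):

```python
# Sketch of a one-way, phrase-lookup translator like the Phraselator:
# only preprogrammed phrases can be "broadcast"; the device cannot
# improvise or translate replies. All entries below are illustrative.
PHRASES = {
    "please step out of the vehicle": {
        "es": "por favor, salga del vehículo",
        "ko": "차에서 내려 주십시오",
    },
    "do you need medical help": {
        "es": "¿necesita ayuda médica?",
    },
}

def broadcast(phrase: str, lang: str) -> str:
    """Return the prerecorded translation, or explain why none exists."""
    translations = PHRASES.get(phrase.lower())
    if translations is None:
        return "[no preprogrammed phrase]"  # device cannot improvise
    return translations.get(lang, "[language not loaded]")
```

Because the mapping is fixed, anything a speaker says back to the officer falls outside the table – precisely the limitation LAPD officers ran into.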
Translations are at your fingertips
Currently, there is no shortage of downloadable smartphone language-translation applications. Google Translate, for instance, allows text messages to be quickly translated into more than 100 available languages.  The app also has a conversational voice-to-voice feature that can be used between two different languages.
While this app is far more versatile than the Phraselator, it has limitations. It works best when the conversation is between two people who speak slowly and concisely, and even then translations are frequently inaccurate. Once a third person joins the conversation, replies become disorganized and phrases must often be repeated. Despite these limitations, the app has shown enormous potential for conversational language translation.
In addition to handheld language-translation devices, multilingual chatbots now appear on many websites. When browsing the Internet, it is common to encounter pop-up windows in which a “representative” of the company whose website is being visited asks if they can be of help. These “representatives” are tools known as chatbots, and they can be very effective at generating consumer engagement with a product or service. As language-translation software becomes more advanced, these chatbots’ multilingual capabilities are allowing consumers to have text conversations with the company’s “representative” in any language.
To understand how these various language-translation platforms work, a quick review of key terms is necessary. Artificial intelligence (AI) is a technology that allows computers to “strive to mimic human intelligence through experience and learning.” Machine learning (ML) refers to a computer’s ability to learn from its past experiences without explicit programming. Natural language generation (NLG) focuses on the creation of text by a machine. Conversational AI applications allow humans to have conversations with computer systems. Through a combination of these platforms, lifelike conversations and translations between humans and machines are becoming a reality.
The next evolution: ChatGPT
The next evolution of this combined technology is coming into focus through the work of the Bay Area tech company OpenAI. Cofounded by Tesla’s Elon Musk, OpenAI has released to the public a text-generating AI tool called ChatGPT, which can answer questions in a humanlike fashion, solve computer coding issues and produce, with ease, articles that humans would struggle to complete. GPT stands for “generative pretrained transformer”; the tool’s text-generating capabilities stem from a huge sample of text taken from the Internet. ChatGPT’s most impressive quality is its capacity for creativity. When asked, it can compose an original poem or solve complex math problems in multiple ways. And, of course, it can translate questions, short phrases and even long articles within seconds.
Sam Altman, OpenAI’s CEO, projects ChatGPT tools will become the main research tool for academia and business alike. Altman said, “We can imagine an ‘AI office worker’ that takes requests in natural language like a human does. Soon you will be able to have helpful assistants that talk to you, answer questions and give advice. Later you can have something that goes off and does tasks for you. Eventually you can have something that goes off and discovers new knowledge for you.” 
As AI tools such as ChatGPT become more widely used and accepted, they have the potential to be embedded in everyday devices, like smart speakers, cellular phones and vehicle computers. People will be able to engage back and forth with these tools in various languages to solve real-world problems or simply converse with them, as moviegoers saw in the 2013 movie “Her.” Beyond mere entertainment, though, do these technologies offer an opportunity for the police to improve the way they do business?
How can artificial intelligence support law enforcement?
Artificial intelligence is already playing a supportive role in law enforcement databases and software solutions. For example, based on its data analysis, AI software can identify behavioral patterns and make predictions of potential future crimes. Developers envision that police officers will be able to engage with AI tools in their patrol vehicles. For instance, if ChatGPT-style AI tools were embedded in patrol vehicles’ on-board computers, officers could discuss crime issues with them, get real-time threat assessments based on past call history, receive criminal history information and have legal questions answered – all while driving on routine patrol.
According to the nonprofit news organization CalMatters, which explores solutions to quality-of-life issues, body-worn cameras have become a part of many officers’ standard uniforms. As the Internet of Things (IoT) expands into body-worn cameras, officers will find their devices connected to computer-aided dispatch (CAD) systems and equipped with internal GPS capable of placing them on scene at investigations. These next-generation body-worn cameras will have interactive features that let dispatch know whether officers are in a vertical or prone position, which can aid in officer-down situations, as well as embedded facial recognition tools.
All these cutting-edge developments are on the near horizon and create strong signals for how language-translation tools will be made available to police in the next decade. Imagine a future when AI tools such as ChatGPT power body-worn cameras. Police officers would wear an earpiece connected via Bluetooth to the body-worn camera. The AI tools in the camera would monitor statements made or questions asked in one language and transmit them to officers’ earpieces in another. Police officers would then respond in their own language, and the body-worn cameras would broadcast the translation.
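The translation loop described above – speaker’s words in, officer’s earpiece out, officer’s reply translated back – can be sketched as a simple pipeline. In the sketch below, the translation and reply stages are trivial stand-ins for real speech-recognition, machine-translation and conversational-AI models; the tags they produce exist only to make the data flow visible:

```python
# Toy sketch of the earpiece translation loop. Each function is a
# stand-in for a real component; a production system would call
# trained models here rather than tag strings.
def translate(text: str, src: str, dst: str) -> str:
    # Stand-in for a machine-translation model.
    return f"[{src}->{dst}] {text}"

def generate_reply(text: str) -> str:
    # Stand-in for a conversational-AI model producing the officer's response.
    return f"Understood: {text}"

def converse(utterance: str, speaker_lang: str, officer_lang: str) -> str:
    heard = translate(utterance, speaker_lang, officer_lang)  # sent to the earpiece
    reply = generate_reply(heard)                             # officer-side response
    return translate(reply, officer_lang, speaker_lang)       # broadcast by the camera
```

The point of the sketch is the round trip: every utterance crosses the language boundary twice, once inbound to the officer and once outbound to the speaker.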
ChatGPT is primarily a text-creating tool. This function will allow it to generate police reports based on interviews or interrogations with witnesses, victims and suspects. Officers will simply turn on their body-worn cameras, ask the appropriate questions for their investigation and allow the AI tool to create a written report.
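A report-drafting step like the one described could be structured as prompt assembly: the camera’s interview transcript and basic case metadata are composed into a single request for the text-generation model. The fields and template below are assumptions for illustration only, and the generation call itself is omitted:

```python
# Sketch of assembling an interview transcript into a drafting prompt
# for a text-generation model. Field names and template wording are
# hypothetical, not any vendor's actual format.
def build_report_prompt(case_no: str, transcript: list[tuple[str, str]]) -> str:
    """transcript is a list of (speaker, statement) pairs from the camera."""
    lines = [f"{speaker}: {statement}" for speaker, statement in transcript]
    return (
        f"Draft a narrative police report for case {case_no} "
        "based on the following interview:\n" + "\n".join(lines)
    )
```

The resulting string would then be submitted to the AI tool, with the officer reviewing and correcting the draft before it enters the record.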
Ethical concerns remain
With any emerging technology comes the need to review ethical concerns and privacy issues. For instance, if AI tools can gather call histories and criminal background checks for law enforcement officers en route to calls, there need to be safeguards to protect this confidential information from being misused. Encryption of radio frequencies is one way of safeguarding this information. Routine audits of how officers use the data are another.
Implicit bias on the part of AI software is a real concern. It is important to understand that humans code AI software. As humans write code, our unconscious thoughts, feelings and behaviors may be displayed in the form of implicit bias that can manifest itself in various ways. For instance, one ChatGPT user asked the platform “whether a person should be tortured.” ChatGPT’s answer was yes, if the person was from North Korea, Syria or Iran. 
One recent study of facial recognition software found that AI tools could discern individuals’ sexual orientation more accurately than human judges could. This illustrates how easily data collected from open-source Internet sites and personal social media accounts can expose individuals’ private information. Erosion of this privacy “can lead to an increase in identity theft and loss of civil liberties due to government surveillance.” Technology has a way of moving forward despite these ethical and privacy concerns, and it will become incumbent upon citizens to hold their government agencies accountable for what information is being collected and why. It is just as important for law enforcement agencies to be transparent with the communities they serve about how AI is being used.
As law enforcement agencies incorporate AI technologies into everyday use, overreliance on these systems can lead to unintentional racial profiling or harm in the very communities officers are trying to protect and serve. California’s Assembly Bill 748, which took effect in 2019, mandates that law enforcement agencies release to the public video footage of officer-involved shootings and other high-profile uses of force within 45 days. One can argue this bill has increased police transparency. For future uses of AI technology, law enforcement agencies would be wise to develop policies and procedures for releasing similar video footage of the technology in action, along with data on how often the technology is used. This may help alleviate citizens’ privacy concerns.
While we’re not yet in a real-life “Star Trek” episode where universal translators are the norm, technology is quickly advancing in that direction. Advancements highlighted in this article signal how the future of language translation may soon manifest itself in law enforcement. The 2020s may become the decade in which the police engage the communities they serve with fluency in the languages spoken by everyone with whom they come into contact. Agencies that embrace cutting-edge language and communications technologies in a way that’s transparent and well thought out will fare better in the near term than those that do not. In turn, those agencies will have more receptive communities rallying around them – in whatever language anyone might want to phrase it.
1. Harrison A. (March 9, 2005.) Machines Not Lost in Translation. Wired.
2. Barco MD. (Jan. 30, 2008.) Phraselator Helps L.A. Police Communicate. NPR.
3. Turovsky B. (Nov. 15, 2016.) Found in translation: More accurate, fluent sentences in Google Translate. The Keyword.
4. Birney A. (Dec. 26, 2022.) No hablo Español? Don’t rely on Google Translate to save you. Android Authority.
5. Grech M. Multilingual Chatbots Are the Key to In-Language Support. Smartling.
6. Perfetto C. (Dec. 16, 2022.) What’s Next for Conversational AI? CIO.
7. Lock S. (Dec. 5, 2022.) What is AI chatbot phenomenon ChatGPT and could it replace humans? The Guardian.
8. Levente. (Dec. 26, 2022.) Could ChatGPT’s AI Ever Replace Humans? Medium.
9. Wired. (Oct. 14, 2022.) How Will Artificial Intelligence Affect Policing and Law Enforcement? AI Plus Info.
10. Fallon C, Cook K, Tietje G. (Nov. 3, 2021.) Human-Machine Teaming: A Vision of Future Law Enforcement. Domestic Preparedness.
11. Lyons B. (Mar. 3, 2022.) California Highway Patrol lags local police, other states in officer body cams. CalMatters.
12. Harris S. Product Feature: The Continuous Evolution of the Body-Worn Camera. Police Chief, 2022.
13. Biddle S. (Dec. 8, 2022.) The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques. The Intercept.
14. Wang Y, Kosinski M. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. APA PsycNet, 2018.
15. Tufekci Z. (Feb. 2008.) Can You See Me Now? Audience and Disclosure Regulation in Online Social Network Sites. Sage Journals.
About the author
Tai Seki is a lieutenant with the Alhambra Police Department in Southern California. He has worked a variety of assignments over a career of more than 21 years. He currently oversees the department’s Investigations Bureau and is its SWAT team commander. He graduated from the California POST Command College Class No. 69 in February 2023.