The recent Oracle financial analyst meeting on September 12, 2024, featured Larry Ellison, Oracle’s chair and CTO, sharing his thought-provoking insights on the tech industry and Oracle’s strategy. While many of his remarks were accurate, some require further examination, especially when applied to policing, where regional governance differences across the U.S. complicate sweeping claims.
Agreeing with Ellison
Ellison emphasized the transformative potential of Oracle’s Cloud Infrastructure (OCI), highlighting its performance, security and cost efficiency. He noted that AI depends on vast amounts of data, a point I agree with, especially in policing, where more data enhances AI’s ability to identify patterns, learn and make accurate predictions.
I continue to assert that AI on the back end, freeing humans to interact with the public, is essential. Ellison highlighted Oracle’s system design, suggesting that minimizing human labor reduces errors and increases security, similar to secure online transactions versus physical credit card use. He also emphasized the vital role of the company’s IoT framework in modern policing, enabling real-time data collection, analysis and response for effective police operations.
Ellison highlighted the potential of Oracle’s IoT framework, which I see playing a role in resource management among many police automation functions, where smart inventory systems can track and maintain equipment, ensuring optimal conditions and availability.
Ellison’s argument for quicker and more affordable autonomous systems and data centers is crucial. This supports my belief in the importance of transparent data dashboards and autonomous backend systems in police agencies. It also liberates personnel from administrative tasks, allowing them to increase community engagement.
Debunking Ellison’s statements
However, not all of Ellison’s claims withstand scrutiny. Despite technological advancements, police must uphold constitutional rights, even when doing so is less efficient or less safe. Ellison asserted that cameras and AI in schools would enhance safety, but the Fourth Amendment and school policies against recording children complicate matters. Some schools have cameras, but police cannot always access or use the footage, adding further complexity.
Ellison neglected to highlight a crucial prerequisite: accurate data within the system, which must come before any consideration of how AI can be beneficial. The integrity and diversity of data determine the effectiveness and fairness of AI systems in policing. Proper data management ensures that AI algorithms do not perpetuate existing biases but instead promote accurate and equitable outcomes. Consider predictive policing, where AI algorithms analyze historical crime data to forecast future criminal activity: if the data feeding these systems is skewed or biased, the result can be disproportionate targeting of certain communities. If historical data reflects over-policing in specific neighborhoods, the AI system will continue to predict higher crime rates in those areas, perpetuating a cycle of surveillance and enforcement. This feedback loop emerged as a key theme in a project I have been working on with the National Academies.
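This feedback loop can be illustrated with a toy simulation. All numbers here are made up for illustration, not drawn from any real system: two neighborhoods have identical underlying crime, but one starts with inflated recorded counts from past over-policing, and a naive forecaster keeps sending patrols where the records are highest.

```python
# Toy sketch of a predictive-policing feedback loop (hypothetical numbers).
# Both neighborhoods have the SAME true crime rate, but "A" starts with
# inflated recorded counts from historical over-policing.

true_rate = {"A": 10, "B": 10}     # identical underlying crime per period
recorded = {"A": 40, "B": 20}      # skewed historical data
detection = {"patrolled": 1.0, "other": 0.5}  # share of crime that gets recorded

for period in range(5):
    # Naive "forecast": patrol wherever recorded counts are highest.
    target = max(recorded, key=recorded.get)
    for hood, rate in true_rate.items():
        frac = detection["patrolled"] if hood == target else detection["other"]
        recorded[hood] += int(rate * frac)
    print(period, target, recorded)
```

Because patrols increase how much crime is recorded, neighborhood "A" is targeted every period and its recorded gap over "B" only widens, even though the true rates never differ. Breaking this loop requires auditing the input data, not just the algorithm.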
Ellison talked about Oracle body cameras, noting that while officers can request the camera be "off" during bathroom or lunch breaks, it always records, with the footage accessible only by court order. This implies officers have no reasonable expectation of privacy, which could be a concern if unions or agencies deploying this platform are unaware of it. Additionally, many departments allow the public to request that officers turn off their body cameras for various reasons; officers risk eroding public trust when cameras that appear to be off are still recording.
AI can assist in video monitoring for various reasons, such as supporting street staff, transcribing footage and ensuring accountability, which will improve safety and efficiency. However, some studies indicate that body cameras don’t significantly affect the use of force, so one should be cautious in assuming that constant monitoring will suddenly make a difference.
Ellison suggested that AI will supervise every officer, reporting issues as they arise. Such oversight must be balanced against individual rights and framed as mentorship rather than surveillance. Studies indicate that administrative oversight contributes to stress and staffing shortages, underscoring the need for balanced leadership.
Ellison stated that drones can respond faster than police cars, adding value by gathering information. This doesn’t replace officers, who still have to respond and intervene. Additionally, cell phone cameras can provide instant live feeds during calls, further aiding information gathering with fewer Fourth Amendment risks.
Using autonomous drones to assist in tasks like spotting forest fires or detecting arson can be challenging, though sometimes feasible. AI-powered cameras already monitor for fires, and drones could be deployed autonomously for specific missions in the future. However, privacy concerns, as highlighted in Leaders of a Beautiful Struggle v. Baltimore Police Department, mean police departments must carefully weigh AI deployment against a risk matrix. In that case, the en banc Fourth Circuit held that Baltimore’s Aerial Investigation Research (AIR) program violated the Fourth Amendment, because its persistent aerial surveillance enabled retrospective tracking of individuals’ public movements, reversing an earlier panel decision that had upheld the program.
I recently presented at the 2024 NIJ Research Conference on AI in Policing, emphasizing the balance between technological advancements and ethical considerations. I urged developers and law enforcement to adopt a human-centered approach, safeguarding constitutional rights and ensuring AI enhances rather than replaces human interaction in policing.
While AI offers substantial benefits like enhanced situational awareness, improved response times, and valuable data analysis, we must avoid over-reliance on technology. There is a fine line between promoting best behavior and creating a chilling effect where people alter their actions out of fear. Over-monitoring can infringe on personal freedoms and stifle creativity. It’s vital to ensure AI aids human judgment while respecting constitutional rights and upholding public trust.