Tech Tonic | Meta Llama’s spark, and countries vying for AI governance supremacy

It is human nature to take things at face value. Most of us would be guilty of doing that with Artificial Intelligence (AI) tools and chatbots. We were told, and we believed, that chatbots were the gospel truth. Many of us also traded prompts that often didn’t mean much, and debated which of OpenAI, Google, Microsoft, Perplexity and Meta had the superior AI. Meta AI, a basic experience integrated within WhatsApp and Instagram, often didn’t elicit much confidence (and you’d be partly right; it wasn’t outstanding). The thing is, Meta was sandbagging. They didn’t want to show what the Llama models were capable of. Until now.
In the past few days and weeks, we’ve had a layered perspective on what Meta’s Llama models are capable of. OpenAI and everyone else have reason to be worried, even though each has made progress with its own models. OpenAI’s GPT-4o is as good as it gets, and the next iteration is supposed to be even more powerful (Sora Sojourn gives that perspective). In late September, Meta released Llama 3.2, an update to the Llama 3 family and its first open-source model capable of processing both images and text. That could prove useful for augmented reality (AR) developers, and Meta has stakes there, with the Ray-Ban Meta glasses going from strength to strength.
Mobile devices and edge computing weren’t ignored, with simpler, text-only models (1B and 3B parameters).
Just this week, Meta AI made it clear that it is ready for war. Meta says it is making Llama available to US government agencies working on defence and national security applications, as well as to private sector partners supporting that work. These include Accenture Federal Services, Amazon Web Services, IBM, Lockheed Martin, Microsoft, Oracle, and Palantir, to name some. The onus will be on these companies to integrate Llama models for the government.
Examples of how Llama will be used are becoming clearer. Oracle will use the Llama models to make aircraft maintenance documents simpler and more coherent for technicians, with the expectation that this will speed up diagnosis and repair. Scale AI, another AI company, will deploy Llama as a layer of support for “specific national security team missions, such as planning operations and identifying adversaries’ vulnerabilities.” Lockheed Martin, they say, has incorporated Llama into its ‘AI Factory’, to speed up tasks such as code generation and data analysis. IBM’s watsonx will bring Llama to US national security agencies, for use in their own data centres and cloud services.
The grey area here could perhaps be Meta’s own Llama 3 Acceptable Use Policy, in which Section 2, Clause ‘a’ states that it must not be used for “Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State.”
To this, Meta has clarified that such use of Llama 3 is very much on the agenda, to “streamline complicated logistics and planning, track terrorist financing or strengthen our cyber defences.” But if you thought going to war was the only thing on Llama’s agenda, you’d be very wrong.
In the quarterly earnings call at the end of last month, Meta CEO Mark Zuckerberg made it clear that they’re “working with the public sector to adopt Llama across the US government.” There’s a lot more to come. Not many of you may remember this (which is why the perspective helps), but Meta’s template with its AI models is similar to what Microsoft did many years ago with the HoloLens augmented reality platform: make early inroads with the US government so the technology is adopted for a variety of use cases, particularly military deployment.
Meta had to move, and the scale of its push matters. In August, OpenAI and Anthropic signed deals with the US government for research, testing and evaluation of their AI models. That is the first step, before eventual, inevitable adoption.
For governments globally, AI can be useful in a variety of resource and time-intensive implementations: expanding the scope of healthcare, monitoring infrastructure, and handling large swathes of data and cyber security tasks, to name some. Analysts at EY, in the latest ‘AIdea of India’ report this summer, noted that “technology is poised to usher in an era of efficiency, innovation, and improved citizen engagement, ultimately leading to a more responsive and effective government.” Within this, AI will play an important role in streamlining processes.
In India, the government has started on an AI journey. Bhashini, an AI translation system built by the Digital India Corporation, is one example. The IUDX programme, a collaboration between the ministry of housing and urban affairs, the ministry of electronics and information technology, and the Indian Institute of Science (IISc), Bengaluru, is using data models for analysis and insights to improve urban governance and service delivery. This is just the start; a lot more is to come.
Globally, AI and intelligent models are already at play in many things we interface with. It is just that they haven’t been branded as AI-powered yet. How do you think those seamless (and often convenient; that’s the selling point) facial recognition scans at airports work? The Delhi government is believed to be planning an AI overlay for the city’s traffic monitoring system, to improve enforcement of fines for breaking the rules, reportedly using machine learning models to monitor and predict traffic volumes at important locations in the city. The data-driven Intelligent Traffic Management System, or ITMS, is expected to go live in the coming months. This is just one example. Macro and micro implementations of AI and algorithm-focused tech don’t follow a template or a one-size-fits-all approach; each will have to fulfil a localised need, while inevitably plugging into a larger one.
Coming back to Meta as I close this week’s thoughts: they aren’t done yet. A few days ago, Zuckerberg said that the company’s next model, expected to be called Llama 4, is being trained on a cluster of GPUs (graphics processing units, the computing hardware behind these models) that is “bigger than anything” used for any model till now. Apparently, the cluster comprises more than 100,000 of Nvidia’s H100 Tensor Core GPUs, each of which costs around $25,000. That is significantly larger than the 25,000 H100 GPUs used to develop Llama 3.
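For a rough sense of scale, and assuming those reported figures hold, 100,000 GPUs at about $25,000 apiece works out to roughly $2.5 billion in chips alone, before networking, power and cooling are counted.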
The next stage of these AI battles could very well be one of governments vying for supremacy with artificial intelligence. AI companies will have a prime role in that battlefront. That, when it comes in totality, will bring its own set of advantages. And challenges.
Vishal Mathur is the technology editor for Hindustan Times. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice-versa. The views expressed are personal.
