⚡ TL;DR
- Mountain View, California just turned off all 30 of its Flock Safety license plate cameras. Why? They discovered that hundreds of law enforcement agencies had been searching their data for over a year without permission.
- Federal agencies in Kentucky, Tennessee, Virginia, Ohio, and Nevada all accessed the system. The city's police chief said he "no longer has confidence" in the vendor.
- This wasn't a hack. No one broke in. The vendor simply flipped a switch that allowed nationwide access, didn't tell the city, and didn't keep records of who searched what.
- Think of it this way: in a hack, someone picks your lock. In this case, the locksmith made copies of your key and handed them out without asking.
- This story applies to every tool you trust with sensitive data: password managers, cloud storage, VPNs, and yes, surveillance systems. "Trust us" isn't a security model. It's a marketing strategy.
What Actually Happened
Mountain View installed 30 Flock Safety automatic license plate reader (ALPR) cameras starting in August 2024. The city worked closely with Flock to design strict data-sharing protocols. The deal was clear: no out-of-state access, no federal access, and every agency that could search the data had to be explicitly approved by the police chief.
That's not what happened.
A self-initiated audit, triggered by a public records request from the Mountain View Voice, revealed two major failures:
Failure #1: National access was enabled without permission.
From August to November 2024, a "nationwide lookup" setting was turned on for one camera. This allowed federal agencies across the country to search Mountain View's license plate data. Agencies that accessed the system included:
| Agency | Location |
|---|---|
| ATF Office | Kentucky |
| ATF Office | Nashville, TN |
| Langley Air Force Base | Virginia |
| U.S. GSA Office of Inspector General | Federal |
| Lake Mead National Recreation Area | Nevada |
| Air Force Base | Ohio |
Flock Safety did not notify Mountain View when the setting was turned on. They did not notify the city when it was turned off. And according to the police department, Flock did not retain records for that four-month period, meaning there's no way to determine what data was actually accessed.
Failure #2: Statewide access was enabled for 17 months.
Even worse, a "statewide lookup" setting was active on 29 of 30 cameras from the moment they were installed until January 5, 2026, when the police department discovered and disabled it. This meant any California law enforcement agency that opted into statewide lookup could search Mountain View's ALPR data without approval.
The Mountain View Voice investigation found that more than 250 California agencies searched the city's data without authorization.
The police department called it "a system failure on Flock Safety's part."
Police Chief Mike Canfield's response was blunt: "I personally no longer have confidence in this particular vendor."
All 30 cameras are now disabled. The City Council will decide their fate on February 24.
The Real Story: When Your Security Vendor Becomes the Vulnerability
Here's where this matters beyond Mountain View.
The city did everything right. They designed strict protocols, got written assurances, and communicated them to the public. But they were undone by a fundamental misunderstanding of modern risk.
We spend our security budgets worrying about external breaches. Hackers. Nation-state actors. Zero-day exploits. But the Mountain View case proves that software misconfiguration is the silent killer of privacy.
The Hidden Threat: Misconfiguration vs. Breach
To understand why the Flock Safety incident is so alarming, we have to look at how it differs from a traditional hack.
| Factor | External Breach (e.g., Substack) | Software Misconfiguration (e.g., Flock) |
|---|---|---|
| The Actor | An outside criminal or state actor | The vendor's own software or employees |
| The Door | Forced open through an exploit or stolen key | Left wide open by a toggle switch or default setting |
| Visibility | Usually triggers alarms or unusual activity alerts | Looks like normal system behavior |
| The Records | Hackers try to delete logs to hide their tracks | The vendor may not even generate logs for "authorized" access |
| The Liability | The vendor is a victim of a crime | The vendor is the architect of the failure |
In plain English: When Substack got hacked, criminals broke in and stole data. That's bad, but at least it's clear who's at fault. When Flock Safety's settings exposed Mountain View's data, the system was doing exactly what the vendor programmed it to do. The vendor just programmed it wrong, or programmed it to prioritize their business goals over the customer's privacy requirements.
That's the difference between a burglar and a negligent landlord. Both leave you exposed. Only one is a crime.
In a breach, a system that works as intended is defeated by an attacker. In Mountain View's case, nothing was defeated: the software executed its configuration flawlessly, and that configuration stood in direct opposition to the customer's legal and ethical requirements.
In the cloud era, a toggle switch is as dangerous as a zero-day exploit.
The Vendor's Response Makes It Worse
Here's what makes this case particularly damaging: Flock Safety's public position is that "sharing settings are always under the control of the agency."
But Mountain View discovered the nationwide setting was enabled without their permission or knowledge. When they asked for access logs from the period when federal agencies were searching their data, Flock Safety said they did not retain those records.
So the vendor claims agencies control their own settings, but the agency says they never enabled nationwide sharing. And the records that would prove who's telling the truth don't exist.
That's not a breach. That's a governance failure. It means the vendor's internal product goals (ease of sharing across jurisdictions) overrode the customer's security goals (data sovereignty and access control).
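What would it take for "who searched what" to be answerable after the fact? At minimum, an append-only access log the customer can independently verify. Here's a toy sketch of a hash-chained log, where each entry commits to the one before it, so deleted or altered records break verification. This is purely illustrative; it is not how Flock's systems are built, and the agency names and queries are made up.

```python
import hashlib
import json

# Toy append-only, hash-chained access log: each entry commits to the
# previous entry's hash, so removing or editing a record breaks the chain.
# Illustrative only -- not Flock Safety's actual design.

def append_entry(log: list, agency: str, query: str) -> None:
    """Append a search record that commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"agency": agency, "query": query, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any gap or tampering makes this return False."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        body = {"agency": entry["agency"], "query": entry["query"], "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "Agency A", "plate:7ABC123")
append_entry(log, "Agency B", "plate:8XYZ987")
print(verify_chain(log))   # chain intact
del log[0]                 # "lost" records are detectable
print(verify_chain(log))
```

With a structure like this, "we did not retain those records" becomes a verifiable claim rather than a dead end: a missing four-month window would show up as a broken chain, not a shrug.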
The Legal Angle: This May Have Broken California Law
This isn't just a policy mistake. It might be illegal.
California has a law called Senate Bill 34 (passed in 2015) that says police agencies cannot share license plate camera data with out-of-state or federal agencies unless they have a court order from a California judge.
Mountain View's data was accessed by federal agencies in Kentucky, Tennessee, Virginia, Ohio, and Nevada. Unless each of those searches had a California court order behind it (which seems unlikely for routine lookups), the city may have been breaking state law for 17 months without knowing it.
The state attorney general has already sued the city of El Cajon for allegedly sharing Flock data with out-of-state agencies. Mountain View could be next.
This is the "Trust Us" problem with legal consequences.
The Pattern You Should Recognize
Mountain View Mayor Emily Ann Ramos put it plainly: "We were given a lot of assurances that we would have control over our data and who gets access to it, and it definitely would not be used by anyone in the federal government, and that clearly wasn't the case."
Sound familiar?
It should. This is the same pattern we saw with LastPass, where users were assured their vaults were secure until attackers stole encrypted vault data and the company took months to fully disclose what happened. It's the same pattern we saw with Substack, where a breach went undetected for four months before users were notified.
But here's what makes the Flock case different and arguably worse: in a breach, someone broke in. In a misconfiguration, the vendor essentially left the back door open and invited the neighbors over.
The pattern is clear: if a feature exists to share your data, it will eventually be toggled "ON" unless you have a way to verify it is "OFF."
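One way to make that verification concrete is to treat sharing settings as configuration that can drift, and diff them against an approved baseline on a schedule. Here's a minimal sketch; the setting names and the reported-settings payload are hypothetical, not Flock's actual API.

```python
# Configuration-drift check: compare the sharing settings a vendor reports
# against the baseline your agreement actually authorizes.
# Setting names here are illustrative; a real vendor API would differ.

APPROVED_BASELINE = {
    "nationwide_lookup": False,
    "statewide_lookup": False,
    "approved_agencies_only": True,
}

def find_drift(reported: dict, baseline: dict) -> list[str]:
    """Return a human-readable finding for every setting that deviates."""
    findings = []
    for key, expected in baseline.items():
        actual = reported.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# Example: the kind of state Mountain View reportedly found on its cameras.
reported = {
    "nationwide_lookup": False,
    "statewide_lookup": True,   # toggled on without the customer's knowledge
    "approved_agencies_only": True,
}

for finding in find_drift(reported, APPROVED_BASELINE):
    print("DRIFT:", finding)
```

A check like this run daily would have surfaced the statewide toggle in hours instead of 17 months. The hard part is not the diff; it's getting the vendor to expose settings in a machine-readable way at all, which is exactly what question-asking at purchase time should secure.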
The Mountain View case is notable because the police chief actually took action: he shut down the entire system and publicly stated he lost confidence in the vendor. That's rare. Most organizations quietly accept the apology and keep using the tool.
The 5 Questions You Should Ask Any Security Vendor
Whether you're a city buying cameras, a business picking a password manager, or a regular person choosing a VPN, these are the questions that matter. The Mountain View case shows what happens when they don't get asked, or when the answers aren't checked.
1. Who can see my data, and how do I know for sure?
Mountain View thought they had strict access controls. They didn't. Settings were changed without their knowledge.
What to ask: Don't just ask who has access. Ask how you can check for yourself. Is there a dashboard? Can you see a list of everyone who's looked at your data? If the answer is "trust us," that's not good enough.
2. Will you tell me when settings change?
Flock Safety turned on nationwide sharing without telling Mountain View. They turned it off without telling them. The city only found out months later through their own investigation.
What to ask: If something changes in how my data is shared or accessed, will I be notified? Automatically? In writing? If not, how am I supposed to know?
3. Do you keep records of who accessed my data?
When Mountain View asked for logs from the period when federal agencies were searching their system, Flock said they didn't have them. Four months of access, and no records exist.
What to ask: What records do you keep about who looks at my data? How long do you keep them? Can I see them if I ask?
4. Has anyone independent checked your security?
Mountain View relied on Flock's promises. Those promises turned out to be wrong. An outside audit might have caught the problems before they caused harm.
What to ask: Has your system been audited by someone who doesn't work for you? Can I see the results? What problems did they find, and did you fix them?
5. If something goes wrong, when will you tell me?
Flock Safety didn't tell Mountain View about the access issues. The city discovered them on their own.
What to ask: If there's a breach or unauthorized access, how quickly will you tell me? Hours? Days? Months? Is there a legal requirement, or is it up to you?
Why This Matters for Your Personal Security
You might be thinking: I don't manage a city surveillance system. Why should I care?
Because you use services that work exactly the same way.
Your password manager stores every password you have. You trust the company to keep it locked down. But if they have a setting somewhere that shares data with "partners" or enables access for "support purposes," how would you know if it got switched on?
Your cloud storage (Google Drive, Dropbox, iCloud) holds your documents, photos, and files. You trust their access controls. But have you ever actually checked who can see your stuff?
Your VPN routes all your internet traffic through their servers. They promise not to log your activity. But "promise" and "proof" are different things.
The Mountain View case involved a $150,000 surveillance system managed by a police department with lawyers, oversight committees, and explicit written agreements, and the city still got burned by a vendor that didn't honor the deal.
If a police department with a legal team can't verify that their vendor is following the rules, what chance do regular people have?
The answer is: you have to ask the questions anyway. And you have to find ways to verify the answers, not just accept them.
Verify, Don't Trust
Every security tool requires trust. But trust without verification is just hope. When choosing a password manager, VPN, or any security service, ask the hard questions and verify the answers.
I use NordPass for password management: independently audited, with transparent security practices.
Affiliate link. I may earn a commission at no extra cost to you.
The "Trust Us" Era Is Over
Mountain View's police chief made a statement that should be pinned to the wall of every security vendor's office:
"No matter how useful a technology may be, the most important tool for public safety is the trust of the community."
He's right. And he's not just talking about police technology.
Every security tool, from password managers to VPNs to encrypted messaging apps, runs on trust. Users trust that the tool does what it claims. They trust that access controls work. They trust that breaches will be disclosed.
When that trust is violated, and the vendor's response is "we're working on it" while refusing to explain what happened, the relationship is broken.
Mountain View responded by shutting down the cameras. LastPass users responded by migrating to competitors. Substack users are changing passwords and enabling 2FA.
The lesson is the same in every case: verify, don't trust.
What You Should Do Today
If you're picking a new security tool or service, run through the five questions above. If you can't get clear answers, that's a warning sign.
If you already use tools that store sensitive data (and you do, even if you don't think about it), here's a simple checklist:
1. Check what devices are logged in.
Most password managers and cloud services show you a list of devices that have access to your account. Look at that list. Remove anything you don't recognize or don't use anymore.
2. Check your sharing settings.
Look at who has access to your files, folders, or passwords. Did you share something with someone a year ago and forget about it? Revoke access you don't need.
3. Turn on activity notifications if available.
Some services will email you when someone logs in from a new device or location. Turn that on. It's an early warning system.
4. Google your vendor's name plus "breach."
Before trusting a company with your data, do a quick search. Have they been breached before? How did they handle it? Companies that hide problems or take months to disclose them will probably do the same thing again.
5. Know how to leave.
Can you export your data and delete your account if you need to? If there's no easy way to take your data with you, that's a red flag. It means the vendor is making it hard for you to leave, which usually means they're not confident you'd stay if you had a choice.
The Bottom Line
Mountain View's license plate camera story is about surveillance. But the lesson isn't about surveillance.
The lesson is simple: "Trust us" is not a security model.
It doesn't matter if the tool is a city camera network or your personal password manager. The questions are the same:
- Who can see my data?
- How do I verify that?
- What happens when something goes wrong?
Mountain View's police chief shut down the cameras because he lost confidence in the vendor. He did the math: no tool is worth using if you can't trust the company behind it.
The question for you is: do you actually know whether your confidence in your tools is justified? Or are you just assuming everything is fine because no one has told you otherwise?
Mountain View assumed everything was fine for 17 months. They were wrong.
Don't make the same mistake.
Frequently Asked Questions
What is an ALPR camera?
An automatic license plate reader (ALPR) is a camera system that captures images of passing vehicles, including license plate numbers, make, model, and sometimes color. The data is stored in a database and can be searched by law enforcement to locate stolen vehicles, track suspects, or investigate crimes. ALPR networks can contain millions of records and are often shared across jurisdictions.
What happened in the Mountain View Flock Safety breach?
Mountain View's police department discovered that hundreds of unauthorized law enforcement agencies had been searching their ALPR data for over a year. A "nationwide lookup" setting was enabled without permission from August to November 2024, allowing federal agencies to access the data. A "statewide lookup" setting was active for 17 months, allowing over 250 California agencies to search the data without authorization. The police chief has disabled all 30 cameras and said he no longer has confidence in Flock Safety.
Is this a data breach or something else?
This is a misconfiguration, not a traditional breach. No outside attacker broke in. Instead, the vendor enabled data-sharing settings that violated the city's policies, without notifying the city and without retaining access logs. In some ways, this is more concerning than a breach because the system was working exactly as the vendor programmed it, just in opposition to the customer's requirements.
Did Mountain View break California law?
Potentially. California Senate Bill 34 (2015) prohibits police agencies from sharing ALPR data with out-of-state or federal agencies without a California court order or warrant. Federal agencies in Kentucky, Tennessee, Virginia, Ohio, and Nevada accessed Mountain View's data. If those searches occurred without court orders, the city may have been in violation of state law for the duration of the unauthorized access.
Is Flock Safety safe to use?
This incident raises serious questions about Flock Safety's data access controls and transparency practices. The company enabled settings that violated the city's data-sharing agreement without notification, and reportedly did not retain access logs for the period when federal agencies were searching the system. Whether Flock Safety is "safe" depends on your specific requirements and risk tolerance, but the Mountain View case demonstrates that vendor assurances should be independently verified.
Can this happen with other security tools like password managers?
Yes. The pattern of vendor promises, failed controls, delayed disclosure, and user harm applies to any security tool. Password managers, cloud storage, VPNs, and other services all require you to trust the vendor's implementation of security controls. The Mountain View case is a reminder to verify those controls rather than accepting assurances at face value.
What should I do if I use Flock Safety or similar surveillance technology?
If you're a government agency or organization using Flock Safety or similar ALPR technology, conduct an immediate audit of your data-sharing settings. Verify that access controls match your policy. Request access logs from the vendor. If you cannot verify that your data-sharing agreements are being honored, consider disabling the system until you can.
How do I know if a security vendor is trustworthy?
Look for independent security audits with published results, clear breach disclosure policies, transparent access logging, and a track record of proactive communication about security issues. If a vendor's primary response to security questions is "trust us" without verifiable evidence, that's a warning sign.
Sources
- Mountain View Voice: "Amid immigration crackdown, Mountain View discovers unauthorized access to license plate data" (January 30, 2026)
- Mountain View Voice: "Mountain View police turn off license plate cameras after data sharing breach" (February 2, 2026)
- CBS San Francisco: "Northern California police chief suspends use of ALPR cameras after outside agencies access data" (February 3, 2026)
- San Jose Spotlight: "Mountain View police turn off license plate cameras after breach" (February 3, 2026)
- Mercury News: "Mountain View police say feds accessed license-plate data without permission" (February 2, 2026)
- The Cyber Express: "Flock Safety ALPR Cameras Shut Down Over Data Access Issue" (February 4, 2026)
T.O. Mercer is a cybersecurity researcher with over 10 years of experience in enterprise security and DevSecOps. He is the founder of SafePasswordGenerator.net, where he helps people take control of their online security with free tools and no-nonsense education.