As Data Privacy Day 2026 approaches, organizations face an inflection point in privacy, artificial intelligence, and cybersecurity compliance. The pace of technological adoption, particularly of AI tools, continues to outstrip legal, governance, and risk frameworks. At the same time, regulators, plaintiffs, and businesses are increasingly focused on how data is collected, used, monitored, and safeguarded.

Below are our Top 10 Privacy, AI, and Cybersecurity Issues for 2026.

1. AI Governance Becomes Operational and Enforceable

AI governance in 2026 will be judged less by aspirational principles and more by documented processes, controls, and accountability. Organizations using AI for recruiting, managing performance, improving efficiency and security, and creating content, among a myriad of other use cases, will be expected to demonstrate how AI systems are developed, deployed, and governed, considering a global patchwork of existing and emerging laws and regulations affecting AI and related technologies.

Action items for 2026:

  • Maintain an enterprise AI inventory, including shadow or embedded AI features.
  • Classify AI systems by risk and use case (HR, monitoring, security, consumer-facing).
  • Establish cross-functional AI governance (legal, privacy/infosec, HR, marketing, finance, operations).
  • Implement documentation and review processes for high-risk AI systems.

2. AI-Driven Workplace Monitoring Under Scrutiny

AI-enabled monitoring tools (dashcams, performance management solutions, wearables, etc.) are increasingly used to track productivity, behavior, communications, and engagement. These tools raise heightened concerns around employee privacy, fairness, transparency, and proportionality, especially when AI generates insights or scores that influence employment decisions.

Regulators and plaintiffs are paying closer attention to whether monitoring amounts to over-collection by design, and whether AI outputs are explainable and defensible.

Action items for 2026:

  • Audit existing monitoring and productivity tools for AI functionality.
  • Assess whether monitoring practices align with data minimization principles.
  • Update employee notices and policies to clearly explain AI-driven monitoring.
  • Ensure human review and appeal mechanisms for AI-influenced decisions.

3. Biometrics Expand and So Does Legal Exposure

Biometric data collection continues to expand beyond fingerprints and facial recognition to include voiceprints, behavioral identifiers, and AI-derived biometric inferences. Litigation under Illinois’ Biometric Information Privacy Act (BIPA) remains active, but risk is spreading through broader definitions of sensitive data in state privacy laws.

Action items for 2026:

  • Identify all biometric and biometric-adjacent data collected directly or indirectly.
  • Review vendor tools to ensure compliance.
  • Update biometric notices, consent processes, and retention schedules.
  • Align biometric compliance efforts with broader privacy programs.

4. CIPA Litigation and Website Tracking Technologies Continue to Evolve

California Invasion of Privacy Act (CIPA) litigation related to session replay tools, chat features, analytics platforms, and tracking pixels remains a major risk area, even as legal theories evolve. AI-enhanced tracking tools that capture richer interactions only heighten exposure. Organizations often underestimate the privacy implications of seemingly routine website and chatbot technologies.

Action items for 2026:

  • Conduct a comprehensive audit of website and app tracking technologies.
  • Reassess consent banners, disclosures, and opt-out mechanisms.
  • Evaluate AI-enabled chatbots and analytics for interception risks.
  • Monitor litigation trends and adjust risk tolerance accordingly.

5. State Comprehensive Privacy Laws Enter an Implementation and Enforcement Phase

Organizations are no longer preparing for state privacy laws; they are living under them. The California Consumer Privacy Act (CCPA), along with other state laws, imposes increasing operational obligations.

California’s risk assessment requirements, cybersecurity audit mandates, and automated decision-making technology (ADMT) regulations represent a significant shift toward proactive compliance.

Action items for 2026:

  • Comply with annual review and update requirements.
  • Conduct CCPA-mandated risk assessments for high-risk processing.
  • Prepare for cybersecurity audit obligations and documentation expectations.
  • Inventory and assess ADMT used in employment, monitoring, and consumer contexts.

6. Data Minimization Becomes One of the Most Challenging Compliance Obligations

Data minimization has moved from an abstract compliance principle to a central operational challenge. Modern AI systems, monitoring tools, and security platforms are frequently architected to collect and retain expansive datasets by default, even when narrower data sets would suffice. This design approach increasingly conflicts with legal obligations that require organizations to limit data collection to what is necessary, proportionate, and purpose-specific, not only in terms of retention, but at the point of collection itself. As regulatory scrutiny intensifies, organizations must be prepared to explain why specific categories of data were collected, how those decisions align with defined business purposes, and whether less intrusive alternatives were reasonably available.

Action items for 2026:

  • Reassess data collection across AI, HR, and security systems.
  • Implement retention limits and transfer restrictions tied to business necessity and legal risk.
  • Challenge “collect now, justify later” deployments that rely on large-scale or continuous data exports.
  • Integrate data minimization and Bulk Data Transfer rule analysis into AI governance and system design reviews.

7. Importance of the DOJ Bulk Transfer Rule

In 2026, bulk sensitive data transfers are no longer a background compliance issue but a regulated risk category in their own right. Under the Department of Justice’s Bulk Data Transfer Rule, which took effect in 2025, organizations must closely assess whether large-scale transfers or access to U.S. sensitive personal or government-related data involve countries of concern or covered persons. The rule reaches a wide range of transactions, including vendor, employment, and service arrangements, and imposes affirmative obligations around due diligence, access controls, and ongoing monitoring.

Action items for 2026:

  • Update data mapping activities to include sensitive data collection and data storage.
  • Catalog where bulk data transfers occur, including transfers between internal systems, vendors, and cross-border environments.
  • Develop a compliance program that includes due diligence steps, vendor agreement language, and internal access controls.
  • Evaluate the purpose of each bulk transfer.

8. UK and EU Data Protection Law Reforms

Recent and proposed amendments to UK and EU data protection laws are designed to clarify or simplify compliance obligations for organizations, regardless of sector. Changes will impact both commercial and workplace data handling practices.   

UK: Data Use and Access Act (DUAA)

The UK has enacted the Data Use and Access Act, which amends key provisions of the UK General Data Protection Regulation (UK GDPR) and the Privacy and Electronic Communications Regulations (PECR). These reforms relate to subject access requests and complaints, automated processing, the lawful basis to process, cookies, direct marketing, and cross-border transfers, among others. Implementation is occurring in stages, with changes relating to subject access requests, complaints, and automated decision-making taking effect over the next few months.

EU: Digital Omnibus Regulation

The European Commission has proposed a Digital Omnibus Regulation, which introduces amendments to the EU General Data Protection Regulation. Proposed changes include redefining “personal data”, simplifying the personal data breach notification process, clarifying the data subject access process, and managing cookies.

Action items for 2026:

  • Review forthcoming guidance from the UK Information Commissioner’s Office.
    • Implement a data subject complaint process.
    • Review existing lawful bases and purposes for processing.
    • Prepare any necessary updates for employee training.
  • Monitor the progress of the proposed Digital Omnibus Regulation.
    • Review data inventories in the event the definition of personal data is revised.
    • Update data subject access response processes.
    • Review the use and nature of any cookies deployed on the organization’s website.

9. Vendor and Third-Party AI Risk Management Intensifies

Most organizations buy rather than build AI technologies, sourcing them from vendors such as recruiting platforms, notetaking tools, monitoring applications, cybersecurity providers, and analytics services whose systems depend on large-scale data ingestion. From procurement to MSA negotiation to record retention obligations, novel and challenging issues arise as organizations seek to minimize third-party and fourth-party service provider risk. Importantly, vendor contracts have not kept pace with the nature of AI models or how to allocate the associated risk.

Action items for 2026:

  • Update vendor diligence to include privacy, security, and AI-specific risk assessments.
  • Revise contracts to address AI training data, secondary use, audit rights, and allocation of liability.
  • Monitor downstream data sharing, model updates, and cross-border or large-scale data movements.

10. Privacy, AI, and Cybersecurity Fully Converge

In 2026, the lines between privacy, cybersecurity, and AI will continue to blur, leaving organizations that silo these disciplines to face increasing regulatory, litigation, and operational risk.

Action items for 2026:

  • Integrate privacy, AI governance, and cybersecurity leadership.
  • Harmonize risk assessments and reporting structures.
  • Align training and compliance messaging across functions.
  • Treat privacy and AI governance as enterprise risk issues.

As Data Privacy Day 2026 highlights, the challenge is no longer identifying emerging risks but managing them at scale, across systems, and in real time. AI, biometrics, monitoring technologies, and expanding privacy laws demand a more mature, integrated approach to compliance and governance.

A blend of evolving judicial interpretation, aggressive plaintiffs’ counsel, and decades-old statutory language has brought new life to the Florida Security of Communications Act (FSCA) as a vehicle for challenging commonplace website technologies.

At its core, the FSCA was enacted to protect privacy by prohibiting the unauthorized interception of wire, oral, or electronic communications, with far stricter requirements than federal law. Unlike the federal Wiretap Act (which allows one-party consent), Florida typically requires all-party consent before recording or intercepting electronic communications. The FSCA also generally prohibits the interception of any wire, oral, or electronic communications, as well as the use and disclosure of unlawfully intercepted communications “knowing or having reason to know that the information was obtained through the interception of a wire, oral, or electronic communication.”

The New Wave of FSCA Claims

For plaintiffs, an attractive provision of the FSCA is that actual damages need not be established to recover for violations. Under the FSCA, a plaintiff can recover liquidated damages of at least $1,000 for violations without a showing of actual harm, as well as punitive damages and attorneys’ fees. One need only examine the explosion of litigation under other laws with similar damages provisions (e.g., the California Invasion of Privacy Act (CIPA), Telephone Consumer Protection Act (TCPA), Illinois Biometric Information Privacy Act (BIPA), the Illinois Genetic Information Privacy Act (GIPA)) to see this model in action.

For years, courts were reluctant to apply the FSCA to digital technologies like website trackers or analytics tools. Courts routinely dismissed early FSCA lawsuits targeting session-replay software and cookies—finding that these tools didn’t intercept the “contents” of communications in a manner the statute was meant to reach. See Jacome v. Spirit Airlines, Inc., No. 2021-000947-CA-01 (Fla. 11th Cir. Ct. June 17, 2021). This view may be shifting.

Recent cases suggest courts may be more open to digital wiretapping-type claims brought in Florida than previously indicated.

  • A nationwide class action pending in the Southern District of Florida, Cobbs v. PetMed Express, Inc., alleges that PetMed Express, an online veterinary pharmacy, used embedded tracking technologies that enabled third-party companies to capture information about consumers’ prescription-related browsing and purchase activity on its website. The tracking tools allegedly intercepted URLs, search queries, and personally identifiable information such as email addresses and phone numbers. This case highlights the growing litigation risks associated with embedded website tracking technologies, particularly when sensitive data such as prescription or health-related information is involved.
  • In Magenheim v. Nike, Inc., filed in December 2025 in the Southern District of Florida, the plaintiffs allege that Nike triggered undisclosed tracking technologies on visitors’ web browsers immediately upon visiting the website, before users could review privacy disclosures or provide consent, and even when users enabled Global Privacy Control (GPC) signals or selected “do not share my data” on the site. The lawsuit seeks class certification covering all Florida visitors to Nike’s website over the past two years, underscoring the increasing litigation risk surrounding online privacy expectations and the handling of browser-based tracking data.
  • In a lawsuit filed against a large health system in Florida and pending before the U.S. District Court for the Middle District of Florida, the plaintiff, a patient of that health system, alleges that the hospital system embedded tracking technologies within its website and patient portal. As pleaded in the putative class action, the tracking tools allegedly intercepted patients’ online queries regarding symptoms, treatments, and other health-related content. The FSCA and federal Wiretap Act claims survived a motion to dismiss, in line with the growing trend of courts scrutinizing the use of tracking technologies, particularly in the health care context.

What Courts Are Grappling With

At the heart of these disputes are questions that courts nationwide are wrestling with:

  • What constitutes an “interception” under an analog-era statute when applied to digital data?
  • Do URLs, clicks, form inputs, and other web interactions qualify as the “contents” of communications protected by wiretapping laws?
  • When (and whether) is consent provided via privacy notices or cookie banners sufficient to defeat a statutory wiretapping claim?

Courts have reached different answers, leaving Florida businesses in limbo, with the uncertainty driving an increasing number of claims from plaintiffs.

What This Means for Your Business

Whether you operate a website, mobile app, or digital marketing campaign, the Florida FSCA litigation trend shows no signs of slowing. To mitigate risks and avoid becoming a target of wiretapping claims, consider the following practical steps:

1. Audit All Tracking Technologies

Inventory all third-party pixels, session-replay tools, analytics scripts, and email tracking. Understand what data they capture, when it’s transmitted, and which third parties receive it.
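
One practical starting point for this inventory is to look at what a page actually requests at runtime. The TypeScript sketch below, which assumes a browser context and is meant only as an illustration, lists the third-party hosts contacted by the current page using the standard Performance API; it will not catch server-side integrations or tags that have not yet fired.

```typescript
// Minimal browser-console sketch: list third-party hosts the current page
// has loaded resources from, as a first pass at a tracking-technology
// inventory. Results reflect only resources loaded so far in this session.
function listThirdPartyHosts(): Map<string, number> {
  const firstPartyHost = window.location.hostname;
  const counts = new Map<string, number>();
  const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
  for (const entry of entries) {
    try {
      const host = new URL(entry.name).hostname;
      if (host !== firstPartyHost && !host.endsWith("." + firstPartyHost)) {
        counts.set(host, (counts.get(host) ?? 0) + 1);
      }
    } catch {
      // Skip entries that are not absolute URLs.
    }
  }
  return counts;
}

// Example: log each third-party host and how many requests it received.
for (const [host, requests] of listThirdPartyHosts()) {
  console.log(`${host}: ${requests} request(s)`);
}
```

A scan like this is a starting point for legal and IT teams to compare what is actually loading against what the privacy notice discloses; it is not a substitute for a full audit.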

2. Reevaluate Your Consent Mechanisms

Passive privacy disclosures may not be enough. Use clear, affirmative consent mechanisms (e.g., click-to-accept banners) that disclose what is collected and how it is used before any tracking occurs.
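
To illustrate what “before any tracking occurs” can mean in practice, the following TypeScript sketch loads a tracking script only after an affirmative opt-in and treats a Global Privacy Control (GPC) signal as an opt-out. The script URL, storage key, and the choice to let a GPC signal override a banner click are assumptions made for the example, not legal guidance.

```typescript
// Illustrative sketch: inject an analytics/tracking script only after the
// visitor affirmatively opts in, and treat a Global Privacy Control (GPC)
// signal as an opt-out. The script URL below is a placeholder, not a real
// vendor endpoint.
const TRACKING_SCRIPT_URL = "https://analytics.example.com/tag.js";

function gpcSignalPresent(): boolean {
  // navigator.globalPrivacyControl is not yet in all TypeScript lib typings.
  return (navigator as any).globalPrivacyControl === true;
}

function loadTrackingScript(): void {
  const script = document.createElement("script");
  script.src = TRACKING_SCRIPT_URL;
  script.async = true;
  document.head.appendChild(script);
}

function onConsentBannerAccept(): void {
  if (gpcSignalPresent()) {
    // Conservative assumption for this sketch: honor the browser-level
    // opt-out even if the banner was clicked.
    return;
  }
  localStorage.setItem("tracking-consent", "granted");
  loadTrackingScript();
}

// On later page loads, load the script only if consent was previously
// granted and no GPC signal is present.
if (localStorage.getItem("tracking-consent") === "granted" && !gpcSignalPresent()) {
  loadTrackingScript();
}
```

The key design point is that no tracking code executes until the visitor’s affirmative choice has been recorded.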

3. Limit Data to What’s Necessary – Minimization

Where possible, restrict the capture of high-risk data (e.g., URLs revealing sensitive information or form content) and weigh whether aggressive tracking is essential for business purposes.
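
One way to operationalize this at the collection layer is to scrub high-risk values before they ever reach an analytics tool. The short TypeScript sketch below is illustrative only; the parameter names are hypothetical, and any real deployment would need to reflect the data actually present in your URLs and forms.

```typescript
// Illustrative helper: strip query parameters that may reveal sensitive
// information (e.g., search terms, email addresses) before a URL is passed
// to an analytics tool. The parameter names are examples only.
const SENSITIVE_PARAMS = ["q", "query", "email", "ssn", "dob"];

function redactUrlForAnalytics(rawUrl: string): string {
  const url = new URL(rawUrl);
  for (const param of SENSITIVE_PARAMS) {
    if (url.searchParams.has(param)) {
      url.searchParams.set(param, "redacted");
    }
  }
  return url.toString();
}

// Example: "https://example.com/search?q=knee%20pain" becomes
// "https://example.com/search?q=redacted" before it is sent anywhere.
console.log(redactUrlForAnalytics("https://example.com/search?q=knee%20pain"));
```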

4. Update Privacy Policies and Terms

Make your data collection and sharing practices transparent and easily accessible. Regularly update legal disclosures to mirror how tools actually function.

5. Tighten Vendor Contracts

Ensure contracts with analytics, marketing, and tracking vendors allocate compliance responsibility and include indemnification clauses where appropriate.

6. Monitor Legal Developments

Florida’s legal landscape is shifting rapidly. Maintain awareness of new decisions and legislative changes that may clarify or expand FSCA applicability.

Conclusion

The surge of digital wiretapping claims under the Florida Security of Communications Act illustrates how old statutes can take on new life in an era of ubiquitous data collection. What once was a niche privacy theory now threatens to subject businesses — large and small — to class action exposure and costly litigation.

By understanding the evolving legal landscape and implementing proactive compliance strategies, companies can better safeguard their digital practices and reduce the risk of costly FSCA claims.

We’re pleased to announce the publication of a comprehensive resource on the Jackson Lewis website:

Navigating the California Consumer Privacy Act: 30+ Essential FAQs for Covered Businesses, Including Clarifying Regulations Effective 1.1.26.

With California’s updated CCPA regulations now in effect as of January 1, 2026, businesses face expanded compliance requirements in several critical areas. The FAQs summarize key provisions in the statute and regulations, providing straightforward analysis for some of the most pressing questions.

Specifically, the FAQs cover a range of issues from fundamental questions about which businesses are covered and what personal information is protected, to detailed explanations of the new requirements around automated decision-making technology (ADMT), risk assessments, and cybersecurity audits. Readers will find practical guidance on meeting their notice obligations, responding to consumer requests, and implementing the technical and organizational safeguards needed to protect personal information.

Whether you’re assessing your organization’s compliance status for the first time or updating your program to reflect the latest regulatory changes, these FAQs offer actionable insights to help navigate this complex landscape.

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.

  • In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
  • In Part 2, we covered all-party consent requirements and AI notetaking technologies.
  • In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.

In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. Cybersecurity and data security risk more broadly pose another major and often underestimated exposure from this technology.

The Risk

AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data—often continuously, and typically transmitting it to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.

Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not “recording,” the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.

Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs’ Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.

Use Cases at Risk

  • Hospital workers going on rounds with their team equipped with AI glasses that access, capture, view, and record patients, charts, wounds, and family members in electronic format, triggering the HIPAA Security Rule and state law obligations
  • Financial services employees wearing AI glasses that capture customer financial data, account numbers, or investment information
  • Any workplace use involving personally identifiable information (PII), such as Social Security numbers, credit card data, or medical information, as well as confidential business information of the company and/or its customers
  • Attorneys and legal professionals using AI glasses during privileged communications, potentially risking waiver of attorney-client privilege
  • Employees connecting AI glasses to unsecured or public Wi-Fi networks, creating man-in-the-middle attack risks
  • Lost or stolen AI glasses that store unencrypted audio, video, or contextual data

Why It Matters

Data breaches involving biometric data, health information, or financial data carry outsized legal and financial consequences. With AI glasses, as a practical matter, an entity generally is less likely to face a large-scale data breach affecting hundreds of thousands or millions of people. However, the exposure of sensitive patient images, discussions, or other data captured with AI glasses could be just as harmful to the reputation of a health system, for example, as an attack by a criminal threat actor, if not more so. Beyond reputational harm, incident response costs, litigation, and regulatory penalties also remain significant risk factors.

Shadow AI (the unauthorized use of artificial intelligence tools by employees in the workplace) also poses potential data security, breach, and third-party risks. Many devices sync automatically to consumer cloud accounts with security practices that employers neither control nor audit. When an employee uses personal AI glasses for work, fundamental questions often go unanswered: Where is the data stored? Is it encrypted? Who has access? How long is it retained? Is it used to train AI models?

Finally, the use of AI glasses can diminish the effects of a powerful data security tool: data minimization. Businesses will need to grapple with whether constant, ambient data collection and recording align with data minimization, a principle woven into data privacy laws such as the California Consumer Privacy Act.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Perform an assessment: Conduct security and privacy assessments of specific AI glasses models before deployment
  • Understand third-party service provider risks: Review security documentation, including encryption practices, access controls, and incident response commitments
  • Understand obligations to customers: Review services agreements concerning the collection, processing, and security obligations for handling customer personal and confidential business information
  • Update incident response plans: Factor in wearable device compromises
  • For HIPAA Covered Entities and Business Associates: Confirm that AI glasses meet HIPAA requirements
  • Evaluate cyber insurance coverage: Assess whether your policy (assuming you have a cyber policy!) covers breaches involving wearable technology and AI-related risks

Conclusion

AI smart glasses may feel futuristic and convenient, but from a data security and compliance perspective, they dramatically expand an organization’s attack surface. Without careful controls, these devices can quietly introduce breach risks, third-party data sharing, and regulatory exposure that outweigh their perceived benefits.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern their use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. In Part 1, we addressed compliance issues that arise when these wearables collect biometric information. In Part 2, we covered all-party consent requirements and AI notetaking technologies.

In this Part 3, we consider broader privacy and surveillance issues, including from a labor law perspective. Left uncontrolled, the nature and capabilities of AI smart glasses open the door to a range of circumstances in which legal requirements as well as societal norms could be violated, even inadvertently. At the same time, a pervasive surveillance environment fueled by technologies such as AI smart glasses may spur arguments by some employees that their right to engage in protected concerted activity has been infringed.

The Risk

When employers provide AI glasses to employees or permit their use in the workplace, they can potentially create continuous and/or intrusive surveillance conditions that may violate the privacy rights of individuals they encounter, including employees, customers, and others. Various state statutes and common law doctrines limit surveillance, and new laws are emerging that would target workplace surveillance technologies. For example, California Assembly Bill 1331, introduced in early 2025, sought to limit employer surveillance and enhance employee privacy. The bill would have banned monitoring in private off-duty spaces (like bathrooms, lactation rooms) and prohibited surveillance of homes or personal vehicles. California Governor Newsom vetoed this bill in October.

However, other California laws, notably the California Consumer Privacy Act (CCPA), seek to regulate surveillance that involves certain personal information. Under the CCPA, continuous surveillance may trigger a risk assessment obligation. See more about that here. The CCPA, along with the comprehensive privacy laws adopted in several other states, requires covered entities to communicate about the personal information they collect from residents of those states. Covered entities that permit employees to use these devices in the course of their employment may need to better understand the type of personal information those employees’ glasses are collecting.

The National Labor Relations Act (NLRA) generally establishes a right of employees to act with co-workers to address work-related issues. Widespread surveillance and recording could chill protected concerted activity, as employees might be less likely to engage with other employees about working conditions under such circumstances. Introducing AI glasses in the workplace may also trigger an obligation to bargain under the NLRA.

Relevant Use Cases

  • Warehouse workers using AI glasses for inventory management that also track movement patterns, productivity metrics, and conversations of coworkers
  • School employees who use AI glasses while interacting with minor students in a range of circumstances
  • Field service technicians wearing glasses that record all customer interactions as well as communications with coworkers
  • Office workers using AI glasses with note-taking features during internal meetings, capturing discussions among employees
  • Healthcare workers in a variety of settings, purposefully or inadvertently, capturing images or data of patients and their families
  • Manufacturing employees whose glasses document work processes while also recording conversations with coworkers

Why It Matters

Connecticut, Delaware, and New York require employers to notify employees of certain electronic monitoring. California’s CCPA gives employees specific rights over their personal information, including the right to know what’s collected and the right to deletion. These protections were strengthened in recently updated regulations under the California Privacy Rights Act, which created, among other things, an obligation to conduct and report on risk assessments performed in connection with certain surveillance activities.

Union environments face additional scrutiny. Surveillance may constitute an unfair labor practice requiring collective bargaining. The NLRB has issued guidance limiting employers’ ability to ban workplace recordings because such bans can interfere with protected rights. However, continuous AI-powered surveillance could still create a chilling effect that violates labor law.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Provide notice: Provide written notice about AI glasses’ capabilities, including what data is collected, how it’s processed, and how it may be used.
  • Perform an assessment: Conduct privacy impact/risk assessments before deploying AI glasses in the workplace, including when interacting with customers.
  • Consider bargaining obligations, protected concerted activity rights: If deploying AI glasses in union environments, engage in collective bargaining about their use and assess protected concerted activity rights.
  • Establish technical limits and safeguards: Consider implementing technical controls like automatic disabling of recording in break rooms, bathrooms, and areas designated for private conversations (see the illustrative sketch below).
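
For organizations weighing such controls, the underlying logic can be as simple as checking a device’s reported position against a list of restricted zones before recording is allowed. The TypeScript sketch below is purely illustrative: the zone names, coordinates, and radius values are invented, and a real deployment would depend on the location and policy APIs exposed by the specific device-management platform.

```typescript
// Hypothetical sketch of a geofence check a device-management layer might
// apply before allowing recording features to activate. All zone data here
// is invented for illustration.
interface RestrictedZone {
  name: string;
  latitude: number;
  longitude: number;
  radiusMeters: number;
}

const RESTRICTED_ZONES: RestrictedZone[] = [
  { name: "Break room", latitude: 40.7128, longitude: -74.006, radiusMeters: 15 },
  { name: "Lactation room", latitude: 40.7129, longitude: -74.0062, radiusMeters: 10 },
];

// Haversine distance between two points, in meters.
function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const earthRadius = 6_371_000;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * earthRadius * Math.asin(Math.sqrt(a));
}

// Recording is permitted only if the device is outside every restricted zone.
function recordingAllowed(deviceLat: number, deviceLon: number): boolean {
  return !RESTRICTED_ZONES.some(
    (zone) =>
      distanceMeters(deviceLat, deviceLon, zone.latitude, zone.longitude) <= zone.radiusMeters,
  );
}
```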

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

New York State’s 2025 legislative session marked a notable moment in the evolution of artificial intelligence (AI) and privacy regulation. Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, creating one of the first state-level frameworks aimed specifically at the most advanced AI systems, while vetoing the proposed New York Health Information Privacy Act (NYHIPA), a bill that would have significantly expanded health data protections beyond existing federal law. Together, these developments provide important signals for businesses operating in or touching New York.

The RAISE Act

The RAISE Act amends the General Business Law to impose transparency and risk-management obligations on developers of certain high-end AI systems. The law is narrowly focused on “frontier models,” defined by extraordinarily high computational thresholds, generally models trained with more than 10²⁶ computational operations and over $100 million in compute costs.

For most businesses, this means the law will primarily affect developers and deployers of the most powerful AI systems rather than everyday enterprise automation tools.

Practical examples of AI technologies that could fall within scope include:

  • Large language models such as GPT-4-class, Claude-class, or Gemini-class systems trained at a massive scale;
  • Generative AI systems capable of producing highly realistic video or audio content, including synthetic voices or deepfake-quality media;
  • Advanced medical or scientific AI tools, such as models used to support diagnostics, drug discovery, or large-scale biological simulations that require substantial computational resources.

Covered “large developers” must implement and publish a safety and security protocol (with limited redactions), assess whether deployment poses an unreasonable risk of “critical harm,” and report certain safety incidents to the New York Attorney General within 72 hours, a timeline distinct from the changes to New York’s data breach notification laws that took effect at the end of 2024.

 While the law does not create a private right of action, enforcement authority rests with the Attorney General, including significant civil penalties for violations.

The RAISE Act takes effect January 1, 2027.

For businesses that license or integrate frontier AI models from third parties, the RAISE Act is also relevant contractually. Vendors may pass through compliance obligations, audit rights, or usage restrictions as part of their efforts to meet statutory requirements.

Health Information Privacy Act Vetoed

Although NYHIPA was vetoed, its contents remain highly relevant, particularly for businesses in health, wellness, advertising, and AI-enabled consumer services. The bill would have applied broadly to any entity processing health-related information linked to a New York resident or someone physically present in the state, regardless of HIPAA status. This would have been a more expansive law than similar state health data laws in Washington and Nevada.

Key provisions included strict limits on processing health data without express authorization, detailed and standalone consent requirements, and explicit bans on consent practices that obscure or manipulate user decision-making. The bill would have excluded research, development, and marketing from “internal business operations”, meaning AI training or product improvement using health data could have required new authorization. Individuals would also have been granted robust access and deletion rights, including obligations to notify downstream service providers and third parties of deletion requests going back one year.

Takeaways for Businesses

Taken together, these developments reflect New York’s intent to play a leading role in AI and privacy governance. For businesses, the message is not one of immediate across-the-board compliance, but of strategic preparation.

Companies developing or deploying advanced AI should strengthen governance, documentation, and incident-response processes. Organizations handling health-adjacent data, especially data that falls outside of HIPAA, should continue monitoring legislative activity and assess whether existing consent flows, data uses, and vendor arrangements would withstand a future version of NYHIPA or similar state laws.

New York’s approach underscores a broader trend: even narrowly scoped laws can have a wide practical impact through contracts, product design, and risk management. Businesses that plan early will be best positioned as this regulatory landscape continues to evolve.

As artificial intelligence (AI) becomes more widely used in hiring and employment decisions, Illinois has taken a significant step to regulate how employers must inform workers about AI’s use. Effective January 1, 2026, House Bill 3773 amended the Illinois Human Rights Act (IHRA) to require, among other things, employer notice when AI influences or facilitates employment decisions. According to reporting from the National Federation of Independent Business, the Illinois Department of Human Rights (IDHR) discussed at a recent stakeholder meeting draft rules to implement the notification requirement. See Subpart J — Use of Artificial Intelligence in Employment.

When Notice Is Required — And When It Isn’t

Under draft Subpart J, notice would be required whenever an employer uses AI to influence or facilitate any “covered employment decision.” A covered employment decision means:

a decision with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.

The draft rules make clear that notice would be required regardless of whether the AI’s use has discriminatory effects — meaning even if the employer believes the technology is fair or unbiased, the notice obligation would still apply.

Examples that would trigger the notice requirement include:

  • Computer-based assessments, skills tests, or personality quizzes used to predict employee outcomes;
  • Resume screening or ranking by AI;
  • AI evaluation of facial expression, voice, or text in interviews;
  • Targeted job advertising driven by AI;
  • AI analysis of third-party data about workers or candidates.

Notice would not be required when an employer uses AI for general business tasks unrelated to influencing or facilitating covered employment decisions. For example:

  • Using AI to draft marketing content or internal reports;
  • Standard word processing, spreadsheets, firewalls, anti-spam systems, or other tools that do not infer, generate, or influence employment decisions as defined.  

When To Provide Notice

Timing matters, and the rules would distinguish between current and prospective employees:

  • For current employees, notice must be provided annually, and within 30 days after adopting or making substantial updates to an AI system used for covered decisions.
  • For prospective employees, notice must be provided as part of the job notice or posting.

These timing requirements aim to ensure transparency throughout the AI adoption lifecycle.

How Employers Must Provide Notice

The draft regulations specify multiple methods to maximize employee awareness and reduce the risk that workers or applicants miss the disclosure:

  • Inclusion in employee handbooks, manuals, or policy documents;
  • Posting in conspicuous physical locations where employer notices are typically displayed;
  • Posting on an employer’s intranet or external website where the employer customarily posts notices to prospective or current employees, including a conspicuous link on the homepage; and
  • Inclusion with any job notice and posting.

What the Notice Must Include

Subpart J’s draft content requirements for notice would go well beyond a simple “yes/no” that AI is used. Required elements would include:

  1. The AI system’s product name and, if applicable, developer or vendor;
  2. Which covered employment decisions the AI system influences or facilitates (e.g., hiring, discipline);
  3. The purpose of the AI system and the categories of personal information or employee data processed;
  4. The types of job positions the AI tool will be used for;
  5. A contact person — typically an HR representative — who can answer questions about the system and its use;
  6. How to request a reasonable accommodation related to the AI use; and
  7. Language from 775 ILCS 5/2-102(L) of the Illinois Human Rights Act.

Accessibility Requirements

Notably, the draft rules emphasize that notices must be accessible:

  • Plain language and a readable format;
  • Availability in languages commonly spoken by the employer’s workforce;
  • Reasonable accessibility for employees with disabilities.

This accessibility focus aligns with broader non-discrimination goals and reinforces meaningful notice beyond mere disclosure.

Context: Statute and Federal AI Policy

The notice requirement stems from Illinois’ 2024 amendments to the Human Rights Act in HB 3773, which added AI use to nondiscrimination protections and included a statutory notice mandate without detail — leaving specifics to IDHR regulations.

Other jurisdictions like Colorado and New York City also regulate AI and automated tools used in hiring — though Illinois’ approach stops short of mandatory bias audits or impact assessments.

At the federal level, the regulatory landscape is shifting. A December 2025 Executive Order (EO) titled Ensuring a National Policy Framework for Artificial Intelligence directs the U.S. Attorney General to establish an AI Litigation Task Force that will evaluate and potentially challenge state AI laws deemed “inconsistent” with federal policy.

Conclusion

Illinois’ draft Subpart J notice rules would establish a comprehensive, detailed disclosure framework for employers using AI in covered employment decisions — aiming for informed consent and transparency across the workforce.

However, with federal policy now pushing toward a national AI regime, state laws like Illinois’ may increasingly be scrutinized or even litigated in the coming years. Staying ahead of both state notice requirements and the evolving federal policy environment will be critical for employers using AI in hiring and workforce decisions.

As we explored in Part 1 of this series, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. Modern smart glasses blend high-resolution cameras, always-on microphones, and real-time AI assistants into a hands-free wearable that can capture, analyze, and even transcribe ambient information around the wearer. These features — from continuous audio capture to automated transcription — create scenarios where bystanders (co-workers, customers, etc.) may be recorded or have their conversations documented without ever knowing it, raising fundamental questions about consent and the boundaries of lawful observation.

Part 2 shifts focus to how these core capabilities intersect with consent requirements and note-taking practices under U.S. and state wiretapping and recording laws. In many jurisdictions, recording or transcribing a conversation without the express permission of all participants — particularly where devices can run discreetly in the background — can trigger two-party (or all-party) consent obligations and potential statutory violations. Likewise, the promise of AI-assisted note taking — where every spoken word in a meeting could be saved, indexed, and shared — brings not just operational benefits but significant legal and business risk. Understanding how the unique sensing and recording features of smart glasses intersect with these consent and notetaking issues is essential for any organization contemplating deployment.

The Risk

AI glasses with continuous recording, AI note-taking, or voice transcription capabilities can easily violate state wiretapping laws. Twelve states require all parties to consent to audio recording of confidential communications, including California, Florida, Illinois, Maryland, Massachusetts, Connecticut, Montana, New Hampshire, Pennsylvania, and Washington. Even in one-party consent states, recording in locations where individuals have reasonable expectations of privacy violates surveillance laws. Going one step further, consider the possibility of the user being close enough to record a conversation between two unrelated persons.

The rise of AI note-taking capabilities in smart glasses makes this risk particularly acute. Unlike traditional recording that often requires deliberate action, AI glasses can passively capture and transcribe conversations throughout the day, creating permanent searchable records of discussions that participants never knew were being documented. Smart glasses that record continuously with no visible indicator amplify this concern.

Relevant Use Cases

  • Sales representatives wearing AI glasses that automatically transcribe client meetings without explicit consent from all parties
  • Managers using glasses with AI note-taking features during performance reviews, disciplinary meetings, or interviews
  • Medical professionals recording patient consultations through smart glasses for AI-generated documentation
  • Employees wearing glasses during phone calls where the other party is in a two-party consent state
  • Anyone wearing recording-capable glasses in restrooms, locker rooms, medical facilities, or other areas with heightened privacy expectations
  • Workers using AI transcription features during confidential business discussions or trade secret conversations
  • OSHA inspectors using AI glasses (announced for expanded deployment in 2025) to record workplace inspections without proper protocols

Why It Matters

Violations of two-party consent laws carry criminal penalties, including potential jail time, as well as civil liability. The fact that many AI glasses lack obvious recording indicators—or have only tiny LED lights that are easily missed—compounds the risk. AI-generated transcripts created without consent or even awareness raise a myriad of issues, some of which are outlined here. The ease with which these devices could continuously record and transcribe conversations raises particular concerns relating to increasing emphasis and regulation directed at data minimization.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Implement clear policies: Develop clear policies about when and where AI glasses with recording capabilities can be worn
  • Get consent: Obtain explicit verbal or written consent from all parties before activating recording features—consent banners on video calls may not suffice for glasses
  • Provide notice: Provide visible notification that recording is occurring (though many AI glasses lack adequate indicators)
  • Establish technical limits and safeguards: Implement geofencing or technical controls to automatically disable recording features in prohibited areas
  • Monitor usage: Maintain detailed logs of when recording features are activated and by whom
  • Train users: Train employees on state-specific wiretapping laws, especially when traveling or conducting interstate communications
  • Increase awareness of device features and capabilities: For AI note-taking features, ensure participants know transcription is occurring and can opt out
  • Leverage existing policies: Apply existing privacy and security controls, such as access and retention, relating to transcripts generated from the wearables.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

Following failed congressional attempts to limit state AI laws, on December 11, 2025, the President issued an Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence. The Order represents federal intervention into the growing landscape of state-level AI regulation. According to the Administration, a patchwork of state laws has created inconsistent and burdensome compliance obligations, particularly for startups and organizations operating across multiple jurisdictions. The Order claims that certain current state AI laws not only restrict innovation but could also force AI developers to incorporate “ideological bias.”

The EO provides the following example:

a new Colorado law banning “algorithmic discrimination” may even force AI models to produce false results in order to avoid a “differential treatment or impact” on protected groups.

To address these concerns, the Executive Order establishes a new AI Litigation Task Force within the Department of Justice. This group is charged with challenging state AI laws that conflict with the federal policy of promoting minimally burdensome, innovation-focused AI governance.

The Administration anticipates litigation against states whose laws it believes unconstitutionally regulate interstate commerce, impose unlawfully compelled speech, or require model outputs to be modified in ways that conflict with federal law. Within 90 days, the Department of Commerce must also publish a public evaluation identifying specific state laws considered “onerous” or inconsistent with the national policy framework, including those that require disclosures or reporting obligations the Administration argues may infringe the First Amendment; the Colorado AI Act and the California Consumer Privacy Act’s ADMT regulations will very likely make the list.

The Order further ties compliance with federal AI policy to federal funding. States that maintain AI laws deemed inconsistent with federal objectives may become ineligible for certain Broadband Equity, Access, and Deployment (BEAD) funds, and federal agencies are directed to explore conditioning other discretionary grants on a state’s willingness to refrain from enforcing its AI regulations during funding periods. This introduces a significant financial dimension to federal-state tensions and may influence how aggressively states choose to regulate AI going forward.

In addition, the Order directs federal agencies to begin steps that lay the groundwork for federal preemption. The Federal Communications Commission must consider creating a national reporting and disclosure standard that would override conflicting state requirements, while the Federal Trade Commission is instructed to clarify that state laws compelling alterations to truthful AI outputs may be preempted under federal prohibitions on deceptive practices. These efforts suggest a shift toward a unified federal approach that could substantially reshape or displace existing state obligations.

The effects of the EO remain uncertain. Organizations have been grappling with a rapid proliferation of state AI laws governing areas such as notice, transparency, nondiscrimination, fairness, safety, accuracy, and vendor management stemming from automated decision-making. For covered organizations, these AI developments also intersect with long-standing civil rights laws, like Title VII and similar state laws, and well-established guardrails to prevent employment discrimination, like the Uniform Guidelines on Employee Selection Procedures, which continue to shape how AI-enabled selection tools must be assessed for compliance. 

If federal litigation succeeds or preemptive standards emerge, some existing obligations may shrink or change. At the same time, organizations should expect a period of regulatory instability as states and the federal government contest the limits of their respective authority. Organizations that have invested heavily in state-specific compliance frameworks may need to revisit or revise them, while AI developers could face shifting expectations around disclosure, output modification, and fairness-related requirements.

The Executive Order also directs federal advisors to prepare legislative recommendations for a uniform federal AI framework. Although the Administration proposes broad federal preemption, it indicates that certain topics—such as child safety protections and state AI procurement rules—should remain within state authority. This signals a coming debate in Congress over how much room states should retain to regulate AI-related issues.

Finally, the Order is almost certain to face legal challenges from states, which may argue that the Administration is exceeding its authority, infringing on state sovereignty, or coercively attaching conditions to federal funding. Litigation could take years to resolve, leaving covered organizations to navigate an evolving legal environment where both federal and state rules remain in flux—underscoring the importance of developing AI governance approaches that are flexible, regularly revisited, and attentive to how AI tools interact with existing employment discrimination laws and privacy requirements, for example. The bottom line is that the Executive Order marks the beginning of an aggressive federal push to standardize AI regulation nationwide, with substantial consequences for compliance, risk management, and future governance. Covered organizations should monitor developments closely and prepare for a shifting regulatory landscape.

Smart glasses with AI capabilities have evolved from futuristic concept to everyday reality. The market exploded in 2024, with global smart glasses shipments surging 210% year-over-year, driven primarily by Meta’s Ray-Ban smart glasses. From the consumer-focused Meta Ray-Ban Display (featuring a built-in heads-up display announced in September 2025) to Meta’s partnership with Oakley for athletic glasses, enterprise solutions like RealWear and Vuzix for industrial use, and developer-focused options like Brilliant Labs’ Frame glasses, these devices promise to revolutionize how we interact with the world.

But with innovation comes risk. Modern AI glasses can record video and audio, process conversations in real-time with AI assistants, perform visual analysis of everything you see, generate meeting summaries, create searchable transcripts, and transmit data to cloud servers—often without obvious visual indicators. For businesses deploying these technologies and individuals using them in professional settings, the compliance landscape is treacherous.

In Part 1 of this series, we address biometric data collection.

The Risk

AI glasses increasingly incorporate biometric data collection capabilities that trigger strict privacy regulations. This includes facial recognition through camera feeds, voiceprint capture through AI transcription (see upcoming Part 2 in this series for AI specific risks), eye tracking and gaze analysis, and even the processing of images that could be used to identify individuals. Under laws like California’s Consumer Privacy Act (CCPA), Illinois’ Biometric Information Privacy Act (BIPA), and the EU’s General Data Protection Regulation (GDPR), biometric data receives heightened protection.

The 2024 Charlotte Tilbury settlement established that virtual try-on features using facial geometry may constitute biometric data collection under BIPA, potentially requiring separate notifications and annual consent reaffirmation. This and other precedents extend directly to AI glasses that process visual and audio data that can constitute biometric information.

Relevant Use Cases

  • Retail employees using AI glasses that analyze customer faces or body language for personalized service recommendations
  • Security personnel deploying glasses with facial recognition capabilities for identification
  • Healthcare providers using glasses that process patient images, potentially capturing biometric identifiers
  • Any workplace use where AI processes images or voices of employees, customers, or the public
  • Industrial workers whose AI glasses capture and analyze faces or voices of colleagues during recorded training sessions

Why It Matters

BIPA provides for statutory damages of $1,000 to $5,000 per violation, along with attorneys’ fees. Following the Illinois Supreme Court’s 2023 Cothron decision, each scan or transmission could constitute a separate violation—though a 2024 amendment limited this to one violation per person per collection method. The $51.75 million Clearview AI settlement in 2025 demonstrates the scale of exposure: with biometric data from millions of individuals, companies face bankruptcy-level liability.

While BIPA may be the most popular of the biometric laws in the United States, it certainly is not the only one. Measures to regulate the collection, use, and disclosure of biometric information exist in states such as California, Colorado, Texas, and Washington, as well as several cities, including New York City and Portland, Oregon.

For a summary of these requirements, see our Biometrics white paper.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Address Applicable Notice, Consent, and Policy Requirements: Organizations may need to create detailed, written policies governing when, where, and how AI glasses may be used. Address recording features, AI processing, data transmission, and specify prohibited uses. Include clear guidance on consumer versus enterprise devices. And, of course, consider applicable notice, consent, and record retention policies.
  • Conduct Privacy Impact Assessments: Before deploying AI glasses, evaluate privacy risks specific to your industry, geography, and use cases. Consider biometric data collection, workplace surveillance, third-party AI processing, and cross-border data transfers. Note such risk assessments may be required, see here and here.
  • Implement Technical Controls: Use device management solutions to control which features can be activated in which locations. Consider geofencing to automatically disable recording in sensitive areas like bathrooms, break rooms, confidential meeting spaces, and healthcare facilities.
  • Vet Vendors and AI Services: Understand where data goes, who processes it, how long it’s retained, what security controls exist, and whether vendors will sign appropriate agreements (BAAs for HIPAA, DPAs for GDPR, etc.). Negotiate contracts that protect your organization and comply with your obligations.
  • Train Rigorously: Ensure all users understand the legal implications of AI glasses, including consent requirements, prohibited uses, data handling obligations, and discovery implications. Training should be role-specific and regularly updated.
  • Monitor Regulatory Developments: Regulation is evolving rapidly concerning biometrics, as well as AI tools that leverage that information for additional capabilities. The EU AI Act took effect in 2024, California increased its AI-regulatory environment in 2024-2025, and federal AI legislation is under consideration. State workplace surveillance laws are proliferating. Stay current with legal developments.
  • Establish Clear Lines of Responsibility: Designate who is responsible for AI glasses compliance, including legal review, privacy assessment, security controls, HR considerations, policy enforcement, and incident response.
  • Consult Legal Counsel: Given the complexity and variability of the regulatory environment, work with attorneys familiar with privacy, employment, biometric, and AI regulations before rolling out these wearables.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

Organizations that fail to address these compliance concerns face not just regulatory penalties, but class action litigation (BIPA damages alone can reach millions), reputational harm, loss of customer trust, and the erosion of employee confidence.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.
