Beyond Standards Archives - IEEE Standards Association

Beyond the Panic Button: 5 Surprising Realities of the Aging Challenge
14 May 2026
Regardless of where we are in our current life cycle, there is a universal truth: we are all aging. For much of human history, life was brief and fragile; in Imperial Rome, most did not live past age 25, and as recently as 1900, the average life expectancy was only 47 years. Today, we have gained more longevity in the last century than in the previous five millennia, pushing the average to 77 and beyond. This shift necessitates a “smarter” approach to aging, as revealed in the recent IEEE AgeTech Initiative webinar, Rethinking Aging: Smarter Homes, Smarter Bodies, Longer Lives, broadcast on 21 April 2026.

The two most powerful forces shaping the 21st century: Aging and Technology; it’s really profound and the IEEE is at the intersection of it.
— Neil Steinberg, Director, PBS Documentary: Aging in America: Survive or Thrive

The Demographic Cliff: The 2-to-1 Caregiver Crisis

The global population is shifting at an unprecedented rate, moving from 1 billion people aged 60+ in 2020 to a projected 2.1 billion by 2050. Simultaneously, the population of those aged 80 and older is expected to triple to 426 million (according to the WHO). This “big bubble” is creating a demographic imbalance that traditional care models simply cannot sustain.

How does the caregiver cope with the problem their loved one is experiencing, especially with cognitive decline? There are valuable resources on the internet that provide information and support through social groups and chats.
— Maxine Cohen, Chair, IEEE AgeTech Education Committee

The caregiver is the unsung hero. Historically, there were 6 potential caregivers for every person over 80; by 2050, that ratio will plummet to just 2 to 1. This “caregiver support ratio” decline transforms technology from a lifestyle luxury into a fundamental necessity for survival. The gap between those needing care and those able to provide it is a crisis shared across all sectors of society.

This is a problem that’s being shared across many different areas, and it’s going to take a village to work together to address how to fill the gap: technology, policy, social and clinical networks.
— Maria Palombini, Global Director, IEEE Healthcare and Life Sciences Practice

Design "With," Not "For": Moving Beyond Hospital Plastic

Many AgeTech failures stem from a design gap where 25-year-old engineers create products without accounting for the sensory realities of aging. High-frequency beeps, for instance, are often the first sounds lost to age-related hearing decline, yet they remain the industry standard for alerts. George Arnold, Chair of the IEEE AgeTech Future Directions Initiative, champions the philosophy that we must design with older adults, rather than just for them, to avoid these engineering blind spots.

Current devices also carry a heavy social stigma, often appearing as “hospital plastic” that marks the wearer as a patient rather than a person. When technology feels like a surveillance shackle rather than a stylish accessory, adoption rates crater regardless of the device’s utility. Dignity must be a primary design specification, moving away from the “piece of plastic around your neck” toward invisible, integrated solutions.

Even if you’re old, you don’t want to look old. You don’t want to have a device that looks like you belong in a hospital.
— Hervé Muller, P&GM North America, Telecom Design

The Privacy Paradox: The Danger of 8-Point Font

We are currently asking our elders to sign away their digital dignity in exchange for safety, often via an 8-point font they literally cannot see. This is a critical compliance and “Equality Legislation” issue, as “all-or-nothing” checkboxes fail the test of meaningful consent.

This isn’t just a senior issue; it is a fundamental inclusivity challenge for anyone with vision or cognitive impairments.
— Puja Modha, Partner, Aria Grace Law

To prevent the home from becoming an intrusive environment of surveillance, companies must adopt three core practices: clear, multi-format notices, granular consent options, and “Consent Refreshes” every 3 to 6 months. These check-ins are vital because health and cognitive capacity are not static. Without these safeguards, the “sanctuary” of the home is replaced by a landscape of constant, unconsented monitoring.
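The consent practices above can be sketched in code. This is a minimal illustration under stated assumptions: the `ConsentRecord` class, the feature names, and the 90-day refresh window are hypothetical, not drawn from any regulation or product.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumption: refresh every 3 months (the text allows 3 to 6).
REFRESH_WINDOW = timedelta(days=90)

@dataclass
class ConsentRecord:
    granted: dict        # granular, per-feature consent, not all-or-nothing
    last_confirmed: date # when the user last re-confirmed their choices

    def needs_refresh(self, today: date) -> bool:
        # Health and cognitive capacity are not static, so consent is not either.
        return today - self.last_confirmed >= REFRESH_WINDOW

    def allows(self, feature: str, today: date) -> bool:
        # Stale consent is treated as no consent until the user re-confirms.
        return self.granted.get(feature, False) and not self.needs_refresh(today)

record = ConsentRecord(
    granted={"fall_detection": True, "audio_monitoring": False},
    last_confirmed=date(2026, 1, 1),
)
print(record.allows("fall_detection", date(2026, 2, 1)))    # → True (fresh)
print(record.allows("fall_detection", date(2026, 6, 1)))    # → False (stale)
print(record.allows("audio_monitoring", date(2026, 2, 1)))  # → False (never granted)
```

The key design choice is that a lapsed refresh window disables the feature rather than silently continuing to monitor, which keeps the home from drifting into unconsented surveillance.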

The ROI of the "House Call": Prevention Over Reaction

The economic argument for AgeTech is staggering: the cost of one Emergency Department visit is equivalent to the cost of 10 house calls. Programs like PACE (The Program of All-Inclusive Care for the Elderly) in the US and the “Hospital at Home” model prove that proactive care is both more cost-effective and more humane. However, the true “surprising reality” is the shift from Fall Detection to Fall Prediction through biometric gait analysis.

Rather than reacting to a broken hip, we can now use RF (Radio Frequency) sensing to monitor movement in a non-contact way. This eliminates the “Apple Watch problem,” where seniors often fall at night while their consumer tech is charging on a nightstand. Because there is currently no third-party testing for accuracy in senior contexts, the IEEE is establishing global standards to ensure these life-saving sensors are reliable and feasible for elderly use.

The Final Frontier: Engineering for Independence and Dignity

While physical safety often dominates the conversation, social isolation remains the “second challenge” of aging. Loneliness is a profound physiological threat, yet the most impactful AgeTech might simply be the platform that reconnects a senior to a human voice. Whether through AI-based companions for cognitive prompting or low-tech connectivity, the goal is to bridge the gap created by mobility loss.

In Spain, the deployment of 300,000 devices featured a simple button that allowed seniors to talk to a “perfect stranger.” While seemingly low-tech, this initiative proved that the psychological need for connection is as vital as medical monitoring. True innovation in this space doesn’t just watch over a body; it engages a mind and restores a sense of community.

Conclusion: A Call to Action for the Aging Village

The IEEE AgeTech Initiative is more than a technical project; it is a mission to advance technology for humanity’s most universal experience. Aging should be viewed as a privilege, but it is one that requires a new ecosystem of standards built by clinicians, developers, and seniors themselves. We invite volunteers from every discipline to help us engineer a world where longevity is defined by quality, not just years.

As we look toward this inevitable future, we must look past the engineering challenges and focus on the human narrative. If we are all destined to join this demographic, what kind of world are we building for our future selves today?

The post Beyond the Panic Button: 5 Surprising Realities of the Aging Challenge appeared first on IEEE Standards Association.

2026 UN STI Forum – Infrastructure for a Sustainable Future: From Innovation to Practical Implementation
14 May 2026

The 11th Multi-stakeholder Forum on Science, Technology and Innovation (STI) for the Sustainable Development Goals (SDGs) took place on 6–7 May 2026 in a hybrid format at the UN headquarters in New York. It focused on the role and contributions of science, technology, and innovation in the achievement of the UN SDGs under the theme of “Transformative, equitable and coordinated science, technology and innovation for the 2030 Agenda and a sustainable future for all.”

With the goal of bringing into the dialogue current activities, practical approaches, and solutions that can help advance the SDGs, including SDG 9 on Industry, Innovation and Infrastructure, IEEE hosted a side event entitled “Infrastructure for a Sustainable Future: From Innovation to Practical Implementation”. The hybrid session opened with virtual remarks from Mary Ellen Randall, IEEE President. Jill Gostin, IEEE 2026 President-Elect, delivered the keynote address, stressing the importance of collaboration to turn innovation into something tangible, scalable, and impactful in the real world. She also noted that standards are enablers: they help improve planning and investment decisions, strengthen procurement processes, ensure interoperability, and reduce risk while accelerating deployment.

Jill Gostin speaking at a 2026 STI Forum event.

The panel featured Dr. Carl Gahnberg, Director, Policy Development and Research, Internet Society; Nicos Ioannou, Electronic Communications Engineer, Department of Electronic Communications, Deputy Ministry of Research, Innovation and Digital Policy, Cyprus; and Dr. Sajith Wijesuriya, Chair, IEEE Power & Energy Society (PES) Young Professionals Advisory Council. In a discussion moderated by Dr. Paulina Chan, Chair, Collaboration and Engagement Committee, IEEE, the panelists shared successful examples of how to connect the unconnected.

The session examined how resilient infrastructure can be strengthened through shared approaches to safety, performance, and cybersecurity-by-design. It highlighted how collaborative approaches can help align deployments with public-interest outcomes, including reliable services, inclusive access, and sustainability, and how standards-informed approaches can improve planning, investment, and procurement decisions so infrastructure remains scalable, maintainable, and future-proof.

Outcomes included a concise set of implementation readiness takeaways, with practical actions to inspire and align pragmatic efforts with measurable results. The takeaways emphasized how progress on SDG 9 acts as an enabler for energy access and sustainable, resilient urban systems, illustrating how coordinated infrastructure and industrial innovation can accelerate multiple goals across the 2030 Agenda.

Panelists underscored the critical role of standardization and community-centered connectivity – networks built by and for local communities – as a proven path in achieving sustainable connectivity and connecting the unconnected.

Cyprus outlined its goal of achieving 100% gigabit and 5G network coverage by 2026 through strategic investments in fiber-to-the-home (FTTH) infrastructure and connectivity voucher schemes to ensure nationwide high-speed access. This roadmap prioritizes digital inclusion and transformation by aiming for 100% online public services, enhancing citizens’ digital skills, and fostering SME innovation to align with the European Union’s Digital Decade 2030.

Discussants also explored how future energy practitioners must evolve beyond traditional technical roles by mastering digital fluency, resilient design, and systems thinking to directly support SDG 9. The discussion outlined actionable strategies to equip the workforce to drive sustainable industrialization, foster innovation, and build resilient infrastructure.

To watch the session recording and view the presentations, please see the session page.

The post 2026 UN STI Forum – Infrastructure for a Sustainable Future: From Innovation to Practical Implementation appeared first on IEEE Standards Association.

2026 UN STI Forum – Scalable Digital Infrastructure: Community Networks, Local Language AI, and Standards-Based Innovation
14 May 2026

The 11th Multi-stakeholder Forum on Science, Technology and Innovation (STI) for the Sustainable Development Goals (SDGs) took place on 6–7 May 2026 in a hybrid format at the UN headquarters in New York. It focused on the role and contributions of science, technology, and innovation in the achievement of the UN SDGs under the theme of “Transformative, equitable and coordinated science, technology and innovation for the 2030 Agenda and a sustainable future for all.”

The Center for Development of Telematics (C-DOT), Government of India, together with the IEEE Standards Association (IEEE SA), hosted a side event at this year’s STI Forum entitled “Scalable Digital Infrastructure: Community Networks, Local Language AI, and Standards-Based Innovation – The Path to SDG Implementation”. This virtual session convened global experts Dr. Rajkumar Upadhyay, Chief Executive Officer and Chairman of the Board, C-DOT, Government of India; Dorothy Stanley, IEEE SA President; Dr. Ashutosh Dutta, Chief 5G Strategist, Applied Physics Lab, Johns Hopkins University, and IEEE Fellow; Talant Sultanov, Chair and Co-Founder, Internet Society Kyrgyz Chapter; and Purva Rajkotia, Director, Connectivity and Telecom Practice, IEEE SA.

The discussion highlighted that despite reliable digital infrastructure being essential to achieving the 2030 Agenda, many communities remain underserved due to cost, limited capacity, and a lack of locally relevant services. This session showcased practical, scalable pathways combining community-led infrastructure, local-language artificial intelligence (AI), and standards-based innovation to advance the SDG implementation.

Impactful initiatives covered included Community Radio (CR)-Bolo, which integrates community radio, hybrid wireless networks, interactive voice response, and decentralized local-language AI to deliver low-cost services; an interoperable and secure public Wi-Fi architecture enabling affordable last-mile hotspots operated by village entrepreneurs; frugal 5G, demonstrating how standards support low-cost, energy-efficient broadband for rural areas; and solar-powered receivers for rural communication.

Additionally, using the example of Kazakhstan, scalable climate monitoring infrastructure was presented, showcasing how the Internet of Things (IoT), Long Range Wide Area Networks (LoRaWAN), and edge AI help rural communities with climate resilience and disaster risk reduction.

These use cases demonstrated how governments, standards bodies, and community networks can help reduce fragmentation, improve affordability and reliability, and promote inclusive innovation. Informed by deployments in rural India and aligned with SDG 6 (clean water and sanitation), SDG 7 (affordable and clean energy), SDG 11 (sustainable cities and communities), and SDG 17 (partnerships for the goals), panelists explored replicable models for inclusive digital ecosystems.

To watch the session recording and view the presentations, please see the session page.

The post 2026 UN STI Forum – Scalable Digital Infrastructure: Community Networks, Local Language AI, and Standards-Based Innovation appeared first on IEEE Standards Association.

Time-Sensitive Networking for Aerospace Onboard Ethernet Communications
14 May 2026

The IEEE 802.1DP™ / SAE AS6675 standard enables high-bandwidth communication networks for aerospace and defense platforms by leveraging Time-Sensitive Networking (TSN) over standard Ethernet.

Standard IEEE 802.3™ Ethernet has proven to be an efficient and cost-effective technology for decades. Its success is driven by economies of scale and a deep knowledge base developed across industries in the networking community. Since inception, Ethernet has continually evolved to meet the industry’s growing needs. 

As part of this evolution, IEEE 802.1™ Time-Sensitive Networking (TSN) was introduced to support time- and/or mission-critical applications. TSN extends the usability of Ethernet networks to aerospace, automotive, industrial automation, and professional audio/video deployments. In addition to the base standards specifying the technology, TSN profile specifications have been introduced to facilitate interoperability, deployment, and use in specific application areas. IEEE 802.1DP / SAE AS6675 is a profile specification targeting aerospace and defense applications.

Modern aircraft depend on real-time onboard communications to perform critical vehicle and mission functions. The aerospace industry recognizes a need for an open, standards-based, high-performance networking solution to interconnect an increasing number of digital components, including sensors, actuators, controllers, processors, displays, and data concentrators. IEEE 802.1 TSN provides a standard Ethernet-based deterministic solution to not only enable higher Quality of Service (QoS), but also lower the size, weight, and power consumption with a converged zonal network architecture. TSN meets the modularity and open-systems requirements essential to the evolution of the data-distribution digital backbone in modern aerospace platforms.

To address the use of TSN for aerospace applications, the IEEE Standards Association (IEEE SA) and SAE International established a joint project that developed the IEEE 802.1DP / SAE AS6675 “IEEE/SAE Standard for Local and Metropolitan Area Networks—Time-Sensitive Networking for Aerospace Onboard Ethernet Communications” profile specification. This joint project brought together the aerospace industry and networking experts to define a TSN profile that meets the unique requirements of aerospace applications and enables interoperability across implementations. By selecting TSN features, defaults, and a common configuration scheme, the recently published IEEE 802.1DP / SAE AS6675 standard will benefit the developers of TSN products, OEMs integrating TSN in aerospace and defense platforms, and ultimately the users of such platforms.

IEEE 802.1DP / SAE AS6675 provides a Time-Sensitive Networking (TSN) profile specification for designers and implementers of aerospace IEEE 802.3 Ethernet networks, supporting a wide range of aerospace applications. Because IEEE 802.1 standards are intentionally broad and intended for use in a variety of environments, this standard selects the features from IEEE 802.1 standards that are directly applicable to aerospace onboard networks, specifies how to use these features as part of a set of well-defined interoperable profiles, and provides guidance on configuration, management, and monitoring. In so doing, this standard facilitates communications within aerospace platforms to meet the reliability, bandwidth, latency, and synchronization needs of aerospace applications. It also provides necessary and valuable information to airframers, system integrators, and suppliers to help with the design of aerospace systems with standard IEEE 802.3 Ethernet onboard networks.

“The approval of the IEEE 802.1DP / SAE AS6675 Time-Sensitive Networking (TSN) aerospace standard is a major step forward for the future of aircraft systems,” said Jim Hvizd, Computing & Networking Leader for GE Aerospace. “This new standard gives the aerospace industry a clear, shared way to use high-bandwidth TSN Ethernet networking on airplanes, replacing custom, one-off network designs with a common, open approach. GE Aerospace is proud to have contributed to this standard, and we look forward to working with our customers to bring it into their next-generation platforms.”

IEEE 802.1DP / SAE AS6675 is available for purchase at the IEEE Standards Store and at IEEE Xplore.

The post Time-Sensitive Networking for Aerospace Onboard Ethernet Communications appeared first on IEEE Standards Association.

Time-Sensitive Networking for Automotive In-Vehicle Communications
12 May 2026

The IEEE 802.1DG™ standard defines a Time-Sensitive Networking (TSN) profile for automotive in-vehicle networks, ensuring deterministic, low-latency, and highly reliable communications over standard Ethernet.

Standard IEEE 802.3™ Ethernet has proven to be an efficient and cost-effective networking technology for decades. Its success is driven by economies of scale and depth of expertise across a broad, multi-industry networking community. Since inception, Ethernet has continuously evolved to meet the needs of a wide range of application areas. 

As part of the evolution, IEEE 802.1™ Time-Sensitive Networking (TSN) was introduced to support time- and/or mission-critical applications. TSN extends the usability of Ethernet networks into domains such as automotive, aerospace, industrial automation, and professional audio/video. In addition to the base standards specifying the technology, TSN profile specifications have been introduced to simplify interoperability, deployment, and TSN use within specific application areas. IEEE 802.1DG is a profile specification targeting automotive applications.

Accurate timing and guaranteed data delivery are critical requirements in automotive environments. Solutions such as IEEE 1588™-based protocols (e.g., IEEE 802.1AS™) can provide timing accuracy in the sub-microsecond range. Such accuracy will be required as Ethernet usage expands within the vehicle. In addition, other IEEE and TSN standards provide secure, ultra-reliable, and bounded low-latency communications throughout the vehicle at multiple data rates. 

At the same time, the in-vehicle wiring harness presents significant challenges with regard to weight and space, coupled with higher throughput requirements for automotive Electronic Control Units (ECUs) and sensors. To address these constraints, various PHYs targeting automotive are available today, including single twisted-pair 10 Mb/s (IEEE 802.3cg™), 100 Mb/s (IEEE 802.3bw™), 1 Gb/s (IEEE 802.3bp™), and 2.5/5/10 Gb/s (IEEE 802.3ch™).

The new standard, IEEE 802.1DG-2025 “IEEE Standard for Local and Metropolitan Area Networks—Time-Sensitive Networking Profile for Automotive In-Vehicle Ethernet Communications,” is the first available IEEE standard developed to specify the use of TSN over IEEE 802.3 Ethernet for automotive in-vehicle networks.

IEEE 802.1DG specifies profiles for bounded latency in-vehicle communications based on IEEE 802.3 Ethernet and IEEE 802.1 TSN standards. IEEE 802.1DG brings together existing IEEE 802.1 TSN standards into a unified framework to support scalable, interoperable, and cost-efficient automotive network architectures based on standard Ethernet. The new standard provides information and guidance to automotive vendors and suppliers designing vehicular systems that require bounded latency in automotive in-vehicle networks. It also addresses the use of features from IEEE 802.1 standards to meet the bandwidth, latency, and synchronization needs for communications within automotive vehicles. 

Because IEEE 802.1 standards are intentionally broad and applicable across many environments, IEEE 802.1DG determines the features from IEEE 802.1 standards that are directly applicable to automotive in-vehicle networks and suggests how these features are used, including recommendations about how to configure optional parameters. “IEEE 802.1DG is a key step forward to use standard Ethernet networks in the automotive industry,” said Glenn Parsons, chair, IEEE 802.1 Working Group. “This work facilitates the adoption of standard Ethernet bridged network in vehicles to address industry demand on developments toward autonomous and software-defined vehicles.”

IEEE 802.1DG is available for purchase at the IEEE Standards Store and at IEEE Xplore.

The post Time-Sensitive Networking for Automotive In-Vehicle Communications appeared first on IEEE Standards Association.

The Australian Social Media Ban & Age Verification — What Does it Mean for Your Global App?
10 April 2026

On December 10, 2025, Australia became the first country to enforce a minimum age requirement preventing children under 16 from creating or holding accounts on designated social media platforms. The Australian Online Safety Amendment Act introduces new compliance expectations around user verification, privacy protections, and platform accountability. For app developers and platform operators worldwide, it represents one example of how governments are examining online child safety and considering stronger regulatory approaches.

The law places responsibility on technology companies, rather than parents or minors, to take steps to prevent underage access. Platforms may face fines of up to AUD $49.5 million (approximately $32–33 million USD) if they fail to take “reasonable steps” to block users under 16. For global apps, a key implication is that regulators may increasingly expect more than basic self-declared age gates. Some organizations are exploring structured approaches such as the IEEE Standard for Online Age Verification to help design risk-based age assurance models that can operate across multiple jurisdictions.

Why Australia’s approach differs from previous age restrictions

Until recently, many countries relied on simple honor-system age gates in which users confirmed they were old enough to access age-restricted services. Australia has moved beyond that model. Instead, platforms are expected to implement what regulators describe as “successive validation.”

This may begin with age inference signals such as IP geolocation, device history, and behavioral patterns. If those indicators suggest a user could be underage, platforms may need to apply stronger checks such as facial age estimation or document-based verification, depending on the service design and assessed risk. Importantly, this is not treated as a one-time verification event. Platforms are expected to monitor for potential circumvention tactics, including VPN usage, fake accounts, or misrepresentation.

Regulators have also indicated that ignoring credible internal indicators of age may be considered non-compliance. If platform data — such as user behavior or engagement patterns — suggests a user is under 16, companies may be expected to respond based on that information. In this framework, age verification becomes an ongoing compliance obligation rather than a static step at account creation.
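The “successive validation” flow described above (start with low-friction inference, strengthen checks only when signals point to a possibly underage user, and treat circumvention signals as added risk) can be sketched as a simple escalation function. The signal names, weights, and thresholds below are illustrative assumptions, not terms from the Australian framework.

```python
# Hedged sketch of risk-based escalation for age assurance.
# All signal names and cutoffs are hypothetical.

def assurance_level(signals: dict) -> str:
    """Map age-inference signals to the next required check."""
    if signals.get("self_declared_age", 99) < 16:
        return "deny"  # self-declared underage: no account to begin with
    risk = 0
    if signals.get("behavioral_underage_score", 0.0) > 0.7:
        risk += 2  # engagement patterns strongly suggest a minor
    if signals.get("device_shared_with_minor", False):
        risk += 1
    if signals.get("vpn_suspected", False):
        risk += 1  # possible circumvention raises, not lowers, scrutiny
    if risk >= 3:
        return "document_verification"  # strongest check
    if risk >= 1:
        return "facial_age_estimation"  # intermediate, lower-friction check
    return "none"  # low risk: no extra friction for now

print(assurance_level({"self_declared_age": 34}))  # → none
print(assurance_level({"self_declared_age": 34,
                       "behavioral_underage_score": 0.9,
                       "vpn_suspected": True}))    # → document_verification
```

Because the inputs include behavioral signals, a function like this would be re-evaluated periodically, reflecting the point that age verification is an ongoing obligation rather than a one-time gate at sign-up.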

The privacy paradox: protecting children without over-collecting data

While platforms can face enforcement action from the eSafety Commissioner for insufficient age checks, they may also face scrutiny from the Office of the Australian Information Commissioner if age-assurance practices are overly intrusive. Australia’s model places clear limits on how age-related data may be collected, used, and retained, emphasizing principles such as data minimization, segregation, and purpose limitation.

In practice, this means information gathered for age assurance should be isolated from core business systems. Age-related signals are not intended to be repurposed for advertising, recommendation engines, or user profiling. Instead, they are collected for a narrowly defined regulatory function: determining access eligibility.

Once that determination has been made, platforms may be expected to limit data retention and ensure that age-assurance data does not flow into unrelated analytics or monetization processes. Controls governing storage, access, and deletion therefore become central compliance considerations rather than optional safeguards.
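As a rough sketch of this segregation and purpose limitation, the hypothetical store below refuses reads for any purpose other than eligibility determination and deletes the underlying signals once a determination is made. All names are illustrative; the Australian framework does not prescribe an implementation.

```python
# Illustrative sketch of purpose limitation for age-assurance data.
# Class and method names are assumptions, not from any regulation.
from datetime import date

class AgeAssuranceStore:
    """Holds age signals apart from product analytics, for one purpose only."""
    PURPOSE = "access_eligibility"

    def __init__(self):
        self._records = {}  # user_id -> (signal, collected_on)

    def record(self, user_id: str, signal: str, collected_on: date):
        self._records[user_id] = (signal, collected_on)

    def read(self, user_id: str, purpose: str):
        # Refuse reads for advertising, recommendations, profiling, etc.
        if purpose != self.PURPOSE:
            raise PermissionError(f"age data may not be used for {purpose!r}")
        return self._records.get(user_id)

    def purge_after_determination(self, user_id: str):
        # Once eligibility is decided, keep only the outcome elsewhere
        # and delete the underlying signals.
        self._records.pop(user_id, None)

store = AgeAssuranceStore()
store.record("u1", "doc_check_passed", date(2026, 1, 5))
print(store.read("u1", "access_eligibility"))  # allowed: returns the record
store.purge_after_determination("u1")
print(store.read("u1", "access_eligibility"))  # → None (signals deleted)
```

The design point is that the purpose check and the deletion step are enforced in the data layer itself, so a weak control elsewhere in the product cannot quietly repurpose age data for personalization.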

For global platforms that rely heavily on data-driven personalization, these requirements can create operational separation between compliance processes and core product systems. Weak controls may lead not only to privacy risks but also to potential regulatory violations under the Online Safety framework.

What this may mean for app developers outside Australia

Apps with Australian users are already subject to these requirements. However, developers operating in other regions may also be monitoring developments closely. In January 2026, UK Prime Minister Keir Starmer told Parliament he was “alarmed” by children’s screen time and noted that “no option is off the table,” including approaches similar to Australia’s. A Fox News poll reported that 64% of American voters favor a social media ban for children under 16. Norway’s government has also indicated plans to consult on a potential minimum age of 15 for social media.

In the United States, Florida enacted House Bill 3 in March 2024, restricting social media accounts for children under 14 and requiring parental consent for users aged 14 and 15. Although the law has faced constitutional challenges, enforcement has proceeded while litigation continues. For global platforms, this growing mix of state and national regulations can create complex compliance planning considerations.

A broader challenge for developers is not tracking a single law but understanding emerging regulatory patterns. Australia’s approach illustrates how expectations around age assurance may evolve from guidance into enforceable requirements, and how regulatory ideas can spread across jurisdictions.

Design decisions made today about age-assurance architecture, data handling, and escalation logic can influence how easily products adapt in the future. Building flexibility into compliance strategies may help reduce the need for repeated redesigns as new rules are introduced.

The role of standards in global age verification compliance

As regulatory expectations diversify, reference frameworks can help organizations interpret and implement requirements more consistently. The IEEE 2089.1-2024 Standard for Online Age Verification provides one such framework. It identifies six indicators of confidence for age assurance, including accuracy, frequency of assurance, counter-fraud measures, authenticity, frequency of authenticity, and birth-date verification. These indicators can help organizations calibrate verification approaches based on regulatory context and risk tolerance.
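As a rough illustration, the six indicators could be treated as a scored profile that an organization checks against its own policy threshold. The indicator names come from the standard as described above; the 0-3 scale, the minimum-score policy, and the function names are all invented for this sketch.

```python
# Hypothetical calibration sketch around the six indicators of
# confidence named in IEEE 2089.1. The scoring scheme is invented;
# the standard defines the indicators, not these numbers.

INDICATORS = (
    "accuracy",
    "frequency_of_assurance",
    "counter_fraud_measures",
    "authenticity",
    "frequency_of_authenticity",
    "birth_date_verification",
)

def confidence_profile(scores: dict) -> dict:
    """Require a score for every indicator (assumed 0-3 scale)."""
    missing = [i for i in INDICATORS if i not in scores]
    if missing:
        raise ValueError(f"unscored indicators: {missing}")
    return {i: scores[i] for i in INDICATORS}

def meets_policy(profile: dict, minimum: int) -> bool:
    """A simple policy: every indicator must reach the minimum score."""
    return all(score >= minimum for score in profile.values())

method = confidence_profile({
    "accuracy": 3, "frequency_of_assurance": 2,
    "counter_fraud_measures": 2, "authenticity": 3,
    "frequency_of_authenticity": 2, "birth_date_verification": 1,
})
print(meets_policy(method, minimum=2))  # -> False
```

A stricter jurisdiction could simply raise the minimum, or weight individual indicators, without changing how verification methods are described.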

The standard also addresses privacy considerations by outlining requirements for data security and information systems management specific to age assurance processes. It clarifies roles and responsibilities for different actors in the verification ecosystem, supporting a shared vocabulary among platforms, regulators, and technology providers. For app developers, the Standard for Online Age Verification can serve as a reference point for developing adaptable implementation strategies rather than building separate solutions for each jurisdiction.

Building a more adaptable age verification strategy

Australia’s minimum age requirement took effect on December 10, 2025 following a deferred commencement period, and enforcement activity is now underway. Additional regulatory developments are also emerging. The eSafety Commissioner’s Phase 2 Industry Codes, rolling out through March 2026, extend age-assurance considerations beyond social media to services such as email, messaging, gaming, search engines, hosting platforms, and app stores.

For developers, this reinforces the importance of addressing age verification early in product design rather than treating it as an add-on to onboarding flows. Approaches that can adapt to differing regulatory regimes may help reduce disruption as policies evolve. Platforms that integrate age assurance as a core product capability may find it easier to respond to new compliance expectations.

Legal challenges in several jurisdictions also demonstrate how age-verification rules can become focal points for public and political debate. Restrictive or poorly communicated controls may generate criticism related to overreach or unintended impacts. These dynamics can translate into extended legal exposure, policy friction, and reputational considerations for platform operators.

Preparing your app for evolving age verification expectations

Australia’s policy approach reflects a broader shift in how governments are examining online child safety and platform responsibility. For app developers, the practical question is how to approach age verification in ways that balance compliance obligations, privacy protections, and user experience considerations.

As additional jurisdictions explore similar regulatory models, developers who treat age assurance as a foundational design factor may be better positioned to adapt. Aligning implementation strategies with established frameworks such as the IEEE Standard for Online Age Verification may provide one pathway for navigating evolving global expectations.

The post The Australian Social Media Ban & Age Verification — What Does it Mean for Your Global App? appeared first on IEEE Standards Association.

]]>
https://standards.ieee.org/beyond-standards/the-australian-social-media-ban-age-verification-what-does-it-mean-for-your-global-app/feed/ 0
Use of AI for Human Powered Advertising https://standards.ieee.org/industry-connections/activities/use-of-ai-for-human-powered-advertising/ Wed, 08 Apr 2026 17:17:55 +0000 https://standards.ieee.org/?page_id=34715 The goal of this Initiative is to convene leaders from industry, engineering, policy, academia, the legal domain, and consumer advocacy to help organizations operationalize responsible AI deployment (all applications and systems, not only GenAI / Agentic AI) across the technologies that support advertising, marketing, and PR, and across the firms that represent the world's largest brands and rely on partners to communicate messaging and connect with their stakeholders.

The post Use of AI for Human Powered Advertising appeared first on IEEE Standards Association.

]]>

About the Activity

Artificial intelligence (all forms) is rapidly becoming the operating infrastructure of advertising and marketing, shaping how brands communicate, how consumers engage, and how economic decisions are influenced at global scale. As organizations accelerate AI adoption, governance, accountability, and trust are emerging as the primary constraints on responsible innovation and sustainable performance.
 
Advertising, representing approximately $1.4 trillion in annual global spend, is now the largest vertical for the proliferation of technologies linked to AI systems, yet it lacks the responsibility frameworks, standardization, and certification infrastructure that other high-risk professions have. That mismatch is where harm and liability emerge, and it provides the incentive for this Initiative.

Goals of the Activity

The goal of this Initiative is to convene leaders from industry, engineering, policy, academia, the legal domain, and consumer advocacy to help organizations operationalize responsible AI deployment (all applications and systems, not only GenAI / Agentic AI) across the technologies that support advertising, marketing, and PR, and across the firms that represent the world's largest brands and rely on partners to communicate messaging and connect with their stakeholders.

Getting Involved

Who Should Get Involved

  • Advertisers
  • Marketers
  • Public Relations Experts
  • Consulting firms
  • Anthropologists dealing with communications
  • Technologists involved in any existing AI (any applications) or personal data oriented standards, certifications or efforts focused on responsible AI

How to Get Involved

To learn more about the program and how to join the Use of AI for Human Powered Advertising Initiative, please express your interest by completing the Use of AI for Human Powered Advertising interest form.

Contacts

The post Use of AI for Human Powered Advertising appeared first on IEEE Standards Association.

]]>
Artificial Intelligence (AI) and Cybersecurity: Emerging Risks, Big Opportunities and the Path to Trust https://standards.ieee.org/beyond-standards/artificial-intelligence-ai-and-cybersecurity-emerging-risks-big-opportunities-and-the-path-to-trust/ https://standards.ieee.org/beyond-standards/artificial-intelligence-ai-and-cybersecurity-emerging-risks-big-opportunities-and-the-path-to-trust/#comments Thu, 26 Mar 2026 10:00:28 +0000 https://standards.ieee.org/?p=33957 Artificial Intelligence (AI) certification and cybersecurity requirements are rapidly evolving from emerging best practices into foundational expectations across industries, education and regulation.

The post Artificial Intelligence (AI) and Cybersecurity: Emerging Risks, Big Opportunities and the Path to Trust appeared first on IEEE Standards Association.

]]>

Artificial Intelligence (AI) certification and cybersecurity requirements are rapidly evolving from emerging best practices into foundational expectations across industries, education and regulation. As AI systems become more prevalent and complex, institutions and governments across the globe are responding with formal training mandates and expanded government frameworks aimed at managing risk, accountability and security. 

These developments reflect a global shift toward certifications and requirements, which consumers and businesses increasingly rely on to demonstrate trustworthiness and guide technology use.

Trends in AI Certification and Cybersecurity

As AI certification and cybersecurity requirements evolve, several trends are shaping the global landscape in 2026:

  1. AI Compliance Requirements are Becoming Mandatory
    While no single certification is required yet, legal and regulatory obligations around AI use are expanding rapidly. Organizations are increasingly expected to demonstrate compliance with emerging laws, governance frameworks, and risk management standards as AI becomes more widely regulated.
  2. Cybersecurity Standards Adapt to AI
    Security certifications now include checks for AI-related risks. IEEE Standards Association (IEEE SA) develops projects and standards that serve as frameworks to help demonstrate that an organization can handle new AI-powered threats and reduce risk.
  3. Quantum Safe & Zero Trust Certifications
    Certifications now cover encryption designed to withstand future quantum computing threats, as well as advanced identity and access models such as AI-powered Zero Trust. These certifications give organizations and users confidence that systems are secure today, while remaining prepared for emerging risks. The IEEE P1943 Standard for Post-Quantum Network Security outlines how existing network protocols can be adapted to remain secure against future quantum threats using hybrid cryptography and quantum-safe protections. 
  4. Privacy & Ethical AI Certifications
    As AI adoption accelerates, new certifications are emerging to verify that AI systems protect user privacy, mitigate bias, and operate transparently. These certifications focus on the ethical impact of AI, not just technical performance, and are increasingly important as organizations work to build trust and meet evolving expectations around responsible AI use.
  5. Workforce & Organizational Readiness
    AI and cybersecurity certifications are expanding beyond individual skills to address organizational readiness. New onboarding and training programs help the employees and teams responsible for managing, securing, and deploying AI across an organization, ensuring consistent governance and reducing risk as AI adoption scales.
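The hybrid-cryptography idea mentioned under the quantum-safe trend above can be sketched as a key-derivation step that mixes a classical shared secret with a post-quantum one, so the session key stays safe if either primitive is later broken. This is a minimal illustration of the general technique, not the IEEE P1943 mechanism itself; the placeholder secrets stand in for real outputs of, for example, an ECDH exchange and a post-quantum KEM.

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF (extract-then-expand) using HMAC-SHA256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # extract
    okm, block = b"", b""
    counter = 1
    while len(okm) < length:                                    # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = b"\x01" * 32   # placeholder for an ECDH result
pq_secret = b"\x02" * 32          # placeholder for a post-quantum KEM result

# Concatenating both inputs means an attacker must break both primitives
# to recover the session key.
session_key = hkdf_sha256(classical_secret + pq_secret,
                          salt=b"handshake-salt", info=b"session")
print(len(session_key))  # -> 32
```

The design choice worth noting is that hybridization happens in the key schedule, so existing protocol messages can carry an extra key share without redesigning the rest of the handshake.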

Use Cases for Good

AI certification and cybersecurity extend beyond theory, offering practical frameworks that help organizations manage risk, protect data and deploy AI responsibly. The real-world applications below show how responsible AI can drive meaningful impact across critical sectors.

  • Healthcare: AI is transforming healthcare systems by enhancing diagnostic accuracy, streamlining administrative tasks and enabling predictive analytics that support better patient outcomes. Advanced security tools can also detect unusual network activity that resembles ransomware and intervene immediately, helping safeguard sensitive patient data before damage occurs.
  • Banks: As scammers leverage AI, financial institutions are responding with equally advanced AI-driven defense systems. These tools can significantly reduce account takeover attempts and identify fraudulent behavior in real time, offering customers stronger protection from financial threats.
  • Email Platforms: Email providers and search engines rely on machine learning to identify harmful content, block phishing attempts and filter out deceptive or malicious messages. These security layers work quietly in the background, protecting billions of users every day.
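The kind of machine-learning filtering described for email platforms can be illustrated with a toy naive Bayes classifier over word counts. Production filters use far richer features and models; the training examples here are invented for the sketch.

```python
import math
from collections import Counter

# Toy naive Bayes spam/phishing filter. Training data is invented.
TRAIN = [
    ("verify your account password urgently", "phish"),
    ("click here to claim your prize now", "phish"),
    ("meeting notes attached for review", "ok"),
    ("lunch on thursday works for me", "ok"),
]

def train(examples):
    counts = {"phish": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in examples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals, prior=0.5):
    vocab = len(set(w for c in counts.values() for w in c))
    scores = {}
    for label in counts:
        logp = math.log(prior)
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a class
            p = (counts[label][w] + 1) / (totals[label] + vocab)
            logp += math.log(p)
        scores[label] = logp
    return max(scores, key=scores.get)

counts, totals = train(TRAIN)
print(classify("urgently verify your password", counts, totals))  # -> phish
```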

How IEEE SA is Supporting AI Certification & Cybersecurity Advancements

AI is rapidly transforming both technology and everyday life. Its ability to generate insights at unprecedented speed makes it a powerful tool, but when misused or poorly implemented, it can also spread inaccurate information and create new vulnerabilities. That’s why trustworthy, secure and ethical AI systems are more important than ever.

IEEE SA is helping shape this future by developing standards that strengthen trust, interoperability and security. Our work includes performance metrics, testing guidelines, and frameworks that organizations can adopt to build responsible and resilient systems.

Key initiatives include:

  • IEEE Ethics for AI System Design Training, which helps professionals integrate ethical principles alongside functional values when designing and developing AI systems.
  • IEEE CertifAIEd Product Certification, offering small and mid-sized organizations an accessible pathway to evaluate ethical AI implementation without requiring a PhD in machine learning or an expensive budget.

We also support organizations through cybersecurity frameworks and related standards.

In addition to standards and certification efforts, IEEE SA supports hands-on learning and community engagement through initiatives such as the upcoming IEEE SA Cybersecurity Hackathon 2026: “TIPPSS & Tricks: Hack the Threat.” The hackathon brings together innovators, professionals, students and cybersecurity enthusiasts to explore emerging threats, the protection of AI systems and the advancement of global standards. Registration for the hackathon opens on April 14. If interested, visit the event website to learn more and sign up.

IEEE SA remains committed to supporting organizations as they navigate AI integration and strengthen cybersecurity practices. To learn more about AI certification, cybersecurity initiatives, or opportunities to participate in standards development, visit the IEEE SA website and the IEEE Standards & Projects for Cybersecurity page.

The post Artificial Intelligence (AI) and Cybersecurity: Emerging Risks, Big Opportunities and the Path to Trust appeared first on IEEE Standards Association.

]]>
https://standards.ieee.org/beyond-standards/artificial-intelligence-ai-and-cybersecurity-emerging-risks-big-opportunities-and-the-path-to-trust/feed/ 1
Shaping the Future of Responsible AI: Key Takeaways from the India AI Impact Summit 2026 https://standards.ieee.org/beyond-standards/shaping-the-future-of-responsible-ai-key-takeaways-from-the-india-ai-impact-summit-2026/ https://standards.ieee.org/beyond-standards/shaping-the-future-of-responsible-ai-key-takeaways-from-the-india-ai-impact-summit-2026/#respond Tue, 24 Mar 2026 10:00:58 +0000 https://standards.ieee.org/?p=33805 IEEE SA was proud to participate in the India AI Impact Summit 2026, an event that has quickly become one of the most influential conversations on the future of artificial intelligence (AI) in the region.

The post Shaping the Future of Responsible AI: Key Takeaways from the India AI Impact Summit 2026 appeared first on IEEE Standards Association.

]]>

IEEE SA was proud to participate in the India AI Impact Summit 2026, an event that has quickly become one of the most influential conversations on the future of artificial intelligence (AI) in the region. Representatives from IEEE SA, including Srikanth (Sri) Chandrasekaran, Country Head & Senior Director, IEEE India; Moira Patterson, Global Market Affairs & Community Engagement Director, IEEE SA; and Alpesh Shah, Managing Director, IEEE Standards, contributed as panelists, moderators and engaged participants.

The summit, held in New Delhi, brought together global leaders to explore how artificial intelligence can drive economic growth, strengthen public services and support sustainable development. IEEE SA is honored to help advance this important work by supporting responsible innovation and emerging AI standards.

Laying the Groundwork for a Trustworthy Information Future

Moira Patterson moderated the session “Information Integrity as an Infrastructure for Trust: Empowering Youth and Future Generations,” in which Alpesh Shah participated as a panelist. The discussion focused on responsible design approaches and the importance of scaling solutions that reflect broader global participation.

The panel examined what is required to strengthen today’s environment, including age-appropriate design principles and the introduction of global trust challenges. The conversation emphasized the need to move from policy aspirations to ethical commitments that support a safer digital future.

Other panelists included:

  • Amir Banifatemi, Founder of AI Commons & Chief Responsible AI Officer at Cognizant
  • Gilles Fayad, Fellow at Mohammed bin Rashid School of Government (MBRSG)
  • Yuko Harayama, Global Partnership on AI (GPAI) Tokyo Expert Support Center
  • Mohammed Misbahuddin, C-DAC Bengaluru
  • Karine Perset, OECD.AI
  • Gabriele Ramos, formerly UNESCO
  • Mariagrazia Squicciarini, UNESCO
  • Uyi Stewart, Vice President, Inclusive Innovation & Analytics at Mastercard

Empowering Teams for the AI-Driven Future

As part of the summit, IEEE SA partnered with the Centre for Development of Advanced Computing (C-DAC) on a workshop titled “AI Capacity Building: Scaling Knowledge, Driving Innovation,” focusing on human capital development across the AI ecosystem.

Sri Chandrasekaran served as a panelist. The session brought together key leaders from the government and education industry in examining how inclusive, well-structured ecosystems can strengthen AI readiness and accelerate innovation across communities.

The session examined what meaningful capacity building requires, from foundational AI literacy to advanced research skills, and explored how scalable frameworks and integrated standards can support long-term growth.


Other panelists included:

  • Dr. Vit Dockal, Director, International Development Research Centre (INDRC) & CLARA SB Chair
  • Dr. Adv. Lalit Patil, Ethics & Security Specialist, INDRC
  • Shri Gokulatheerthan M, Scientist F, C-DAC
  • Moderated by Shri Ramesh Naveti, Scientist F, C-DAC

The Importance of Global Trust

In addition, Shah participated in a session focused on the global trust challenge. The discussion highlighted the importance of empowering future generations and emphasized the need for multilateral collaboration to build trusted information standards and structures. 

This aligns with the recent launch of IEEE SA’s participation in the Global Trust Challenge, a global, multi-stakeholder initiative designed to move ideas for more trustworthy AI and digital information ecosystems from concepts to real-world testing. This challenge brings together policymakers and researchers to test what works, surface evidence, and support solutions that can be responsibly scaled.

Key Outcomes from the AI Summit

A central theme emerged across the summit: long-term progress in artificial intelligence depends on responsible frameworks that strengthen trust, safety and digital readiness. The conversations throughout the event highlighted that the goal is not to identify a single solution. Instead, the focus is on equipping organizations such as IEEE SA with the guidance and resources needed to advance standards, refine governance models and support innovation. As the digital world evolves, we must build systems that can evolve alongside it and protect users while enabling meaningful participation. 

Join the Global Trust Challenge

Get involved in standards development

The post Shaping the Future of Responsible AI: Key Takeaways from the India AI Impact Summit 2026 appeared first on IEEE Standards Association.

]]>
https://standards.ieee.org/beyond-standards/shaping-the-future-of-responsible-ai-key-takeaways-from-the-india-ai-impact-summit-2026/feed/ 0
LG AI Research’s 2025 Ethical Priorities: Building AI That Earns Trust https://standards.ieee.org/beyond-standards/lg-ai-researchs-2025-ethical-priorities-building-ai-that-earns-trust/ https://standards.ieee.org/beyond-standards/lg-ai-researchs-2025-ethical-priorities-building-ai-that-earns-trust/#respond Thu, 19 Mar 2026 10:00:44 +0000 https://standards.ieee.org/?p=33796 In its 2025 Accountability Report on AI Ethics, LG AI Research highlighted its focus on translating ethical principles into operational practice. The organization strengthened its internal governance, expanded tools that help identify risk, and emphasized processes that make AI systems more reliable and fair.

The post LG AI Research’s 2025 Ethical Priorities: Building AI That Earns Trust appeared first on IEEE Standards Association.

]]>

In its 2025 Accountability Report on AI Ethics, LG AI Research highlighted its focus on translating ethical principles into operational practice. The organization strengthened its internal governance, expanded tools that help identify risk, and emphasized processes that make AI systems more reliable and fair.

Key initiatives included:

  • Scaled AI Ethical Impact Assessments: Roughly 60 projects underwent structured review, identifying 219 potential risks and closing about 82% of them. The remainder are linked to future projects, ensuring that mitigation measures continue to be addressed over time. This process helps ensure that ethical considerations influence projects early, rather than after deployment. 
  • Enhanced AI risk taxonomy: LG expanded its K-AUT framework to 226 detailed risk categories, covering areas such as privacy, social safety, and emerging risks in advanced AI systems. This taxonomy guides consistent evaluation across teams. 
  • Model safety verification: The organization employed internal and external red-teaming and used its KGC-SAFETY benchmark to test models across multilingual and adversarial scenarios, improving resilience and reducing unsafe outputs. 
  • Data provenance and compliance: Using EXAONE Nexus, LG introduced automated data tracing that achieved 81% accuracy and operated 45× faster than human review—helping identify copyright risks in large training datasets. 

Together, these efforts illustrate LG AI Research’s commitment to building systems that are transparent in their development, thoughtful in their use of data, and proactive about the real-world risks AI can introduce.

The IEEE SA Partnership: Strengthening Verification and Raising the Bar

A major milestone highlighted in the report is LG AI Research’s formal collaboration with the IEEE Standards Association (IEEE SA). In 2024, LG became the first organization in Korea qualified as an Authorized Assessor for the IEEE CertifAIEd™ program, a global initiative that evaluates AI systems across the pillars of Accountability, Privacy, Transparency, and Algorithmic Bias. Through this partnership:

  • LG AI Research began conducting official IEEE CertifAIEd assessments, applying the program’s structured verification process to real AI products.
  • The collaboration supported the certification of LG Electronics’ ThinQ ON, which became the first AI product globally to receive IEEE CertifAIEd™—a result verified through IEEE SA’s independent multi-stage review process. 

The report details how this certification process works, from determining assessment scope to documentation review and IEEE SA’s independent validation. CertifAIEd™ provides a repeatable way for companies to demonstrate that their AI meets recognized ethical benchmarks before entering the market.

LG’s participation in this program is significant not just for the company, but for the broader AI ecosystem. By applying standards-based evaluation internally and sharing insights externally, LG is helping advance the practical adoption of AI ethics frameworks beyond regulatory compliance.

What’s Next: How Other Organizations Can Pursue Responsible AI Certification

As global expectations for safe and trustworthy AI continue to grow, more organizations are exploring structured ways to demonstrate responsible development. IEEE SA’s CertifAIEd™ program provides one such pathway, offering:

  • Assessment of AI systems against established ethical criteria
  • Professional training for teams involved in AI design, risk, and compliance
  • Curriculum options for organizations and academic institutions that want to integrate responsible AI concepts more formally

These options allow companies to start where it makes sense for them—whether by validating a product, training internal experts, or laying the foundation of knowledge across their workforce.

The example set by LG AI Research in 2025 shows the value of combining internal governance with external verification: organizations gain clarity, customers gain confidence, and the industry gains a more consistent standard for responsible AI.

A Path Forward

LG AI Research’s progress reflects a larger shift underway: moving from high-level ethical goals to concrete, testable practices. Its collaboration with IEEE SA demonstrates how independent assessment can complement internal governance, offering transparency and reinforcing accountability at scale.

Organizations seeking to strengthen their own responsible AI programs can look to this model, pairing in-house controls with recognized external standards, to build systems that earn trust and meet global expectations.

Learn more about IEEE CertifAIEd and how it can help strengthen your AI solution or become an IEEE authorized collaboration partner.

The post LG AI Research’s 2025 Ethical Priorities: Building AI That Earns Trust appeared first on IEEE Standards Association.

]]>
https://standards.ieee.org/beyond-standards/lg-ai-researchs-2025-ethical-priorities-building-ai-that-earns-trust/feed/ 0