Artificial Intelligence Standards

"The urge to save humanity is almost always a false front for the urge to rule" - H.L. Mencken


October 13, 2025
mike@standardsmichigan.com

U.S. Artificial Intelligence Safety Institute

ANSI Response to NIST “A Plan for Global Engagement on AI Standards”

On April 29, 2024, NIST released a draft plan for global engagement on AI standards.

Comments are due by June 2, 2024. More information is available here.

 

Request for Information Related to NIST’s Assignments

Under Sections 4.1, 4.5, and 11 of the Executive Order Concerning Artificial Intelligence

The National Institute of Standards and Technology seeks information to assist in carrying out several of its responsibilities under the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023. Among other things, the E.O. directs NIST to undertake an initiative for evaluating and auditing capabilities relating to Artificial Intelligence (AI) technologies and to develop a variety of guidelines, including guidelines for conducting AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.

Regulations.gov Filing: NIST-2023-0009-0001_content

Browse Posted Comments (72 as of February 2, 2024 | 12:00 EST)

Standards Michigan Public Comment

Attention Is All You Need | Authors: Ashish Vaswani et al. (2017).  This groundbreaking paper introduced the Transformer architecture, replacing recurrent layers with self-attention mechanisms to enable parallelizable, efficient sequence modeling. It laid the foundational blueprint for all subsequent LLMs, revolutionizing natural language processing by capturing long-range dependencies without sequential processing.
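To make the mechanism concrete, here is a minimal single-head sketch of the scaled dot-product self-attention the paper describes, written in plain NumPy. The function name, toy dimensions, and random projection matrices are illustrative assumptions, not taken from the paper, and multi-head attention, masking, and positional encodings are omitted.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over one sequence.

    x:             (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices
    Returns a (seq_len, d_k) matrix of context vectors.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # pairwise compatibility of queries and keys
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability before softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ v                              # weighted sum of value vectors

# Toy usage: 4 tokens, model width 8, head width 8, all values random
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(x, w_q, w_k, w_v).shape)       # -> (4, 8)
```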
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Authors: Jacob Devlin et al. (2018). BERT pioneered bidirectional pre-training via masked language modeling, allowing models to understand context from both directions. As an encoder-only Transformer, it achieved state-of-the-art results on 11 NLP tasks and established the pre-training/fine-tuning paradigm that underpins bidirectional LLMs like those in search and classification.
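The sketch below illustrates the masked-language-modeling corruption step that drives BERT's pre-training, under simplifying assumptions: MASK_ID is a made-up token id, -100 is used as the conventional "ignore this position" label, and the paper's 80/10/10 scheme (sometimes substituting a random token or leaving the original) is omitted.

```python
import numpy as np

MASK_ID = 0  # hypothetical id reserved for a [MASK] token in a toy vocabulary

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """Corrupt a token sequence for one masked-language-modeling step.

    Returns (corrupted_ids, labels): masked positions carry MASK_ID in the
    corrupted sequence and the original id in labels; all other labels are
    -100, marking positions that do not contribute to the loss.
    """
    if rng is None:
        rng = np.random.default_rng()
    token_ids = np.asarray(token_ids)
    mask = rng.random(token_ids.shape) < mask_prob
    corrupted = np.where(mask, MASK_ID, token_ids)
    labels = np.where(mask, token_ids, -100)
    return corrupted, labels

# Toy usage with made-up token ids for a short sentence
ids = [101, 7592, 2088, 2003, 2307, 102]
print(mask_tokens(ids, mask_prob=0.3, rng=np.random.default_rng(1)))
```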
Training Compute-Optimal Large Language Models | Authors: Jordan Hoffmann et al. (2022).  Known as the Chinchilla paper, it revealed that optimal LLM performance requires balanced scaling of model size and data volume (e.g., 70B parameters trained on 1.4T tokens outperform larger models with less data). This shifted research toward data-efficient training, influencing efficient LLM development.
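As a back-of-the-envelope illustration of that balance, the sketch below combines two widely cited rules of thumb associated with the paper: a training cost of roughly 6 x N x D FLOPs for N parameters and D tokens, and roughly 20 training tokens per parameter. The function name and exact constants are assumptions for illustration, not the paper's fitted scaling law.

```python
def compute_optimal_split(compute_flops, tokens_per_param=20.0):
    """Rough compute-optimal split using C ~= 6 * N * D and D ~= 20 * N.

    compute_flops: training compute budget in FLOPs
    Returns (params, tokens) under the stated rules of thumb.
    """
    params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    tokens = tokens_per_param * params
    return params, tokens

# Chinchilla itself: 70B parameters on 1.4T tokens is roughly 5.9e23 FLOPs
n, d = compute_optimal_split(5.9e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")   # ~7e10 params, ~1.4e12 tokens
```

Running the example recovers the headline Chinchilla configuration (about 70B parameters and 1.4T tokens), which is why the paper argued that many earlier large models were undertrained for their size.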


Unleashing American Innovation

Federal Agency Conformity Assessment

Time & Frequency Services

Technical Requirements for Weighing & Measuring Devices

Why You Need Standards

Summer Internship Research Fellowship

A Study of Children’s Password Practices

Human Factors Using Elevators in Emergency Evacuation

Cloud Computing Paradigm

What is time?

Readings / Radio Controlled Clocks

Standard Reference Material
