Category Archives: @NIST


Standards Curricula Program

How to Apply | Awardees 2012-2025 | News Items

NIST Headquarters

2024 Update: NIST Awards Funding to 8 Universities to Advance Standards Education


The Standards Coordination Office of the National Institute of Standards and Technology conducts standards-related programs and provides knowledge and services that strengthen the U.S. economy and improve the quality of life. Its goal is to equip U.S. industry with the standards-related tools and information necessary to compete effectively in the global marketplace.

Every year it awards grants to colleges and universities through its Standards Services Curricula Cooperative Agreement Program to provide financial assistance for curriculum development at the undergraduate and/or graduate level. These cooperative agreements support the integration of standards and standardization content into seminars, courses, and learning resources. The recipients will work with NIST to strengthen education and learning about standards and standardization.

The 2019 grant cycle will require application submissions before April 30, 2019 (contingent upon normal operation of the Department of Commerce).  Specifics about the deadline will be posted on the NIST and ANSI websites.  We will pass on those specifics as soon as they are known.

The winners of the 2018 grant cycle are Bowling Green State University, Michigan State University, Oklahoma State University, and Texas A&M University.

The University of Michigan received an award during the previous grant cycle (2017). An overview of the curriculum, human factors in automotive standards, is linked below:

NIST Standards Curricula INTRO Presentation _ University of Michigan Paul Green

Information about applying for the next grant cycle is available at the How to Apply link above and from Ms. Mary Jo DiBernardo (301-975-5503; maryjo.dibernardo@nist.gov).

LEARN MORE:

Click here for a link to the previous year's announcement.

Technical Requirements for Weighing & Measuring Devices

Three Felonies a Day: How the Feds Target the Innocent

 

Technical Barriers to Trade

World According to Marco Polo

 

We track action in international administrative procedures that affect the safety and sustainability agenda of the education facility industry. From time to time we find product purchasing contracts that contain “boilerplate” requiring conformity with applicable regulations under the Agreement on Technical Barriers to Trade (TBT). Common examples are found in contracts for the acquisition of information technology and specialty laboratory equipment.

The World Trade Organization TBT Agreement obliges all Parties  to maintain an inquiry point that is able to answer questions from interested parties and other WTO Members regarding technical regulations, standards developed by government bodies, and conformity assessment procedures, as well as provide relevant documents.  The TBT Agreement also requires that WTO Members notify the WTO of proposed technical regulations and conformity assessment procedures so interested parties can become acquainted with them and have an opportunity to submit written comments.

Technical Barriers to Trade Information Management System

The inquiry point and notification authority for the United States is operated by the National Institute of Standards and Technology, an agency within the U.S. Department of Commerce. We provide a link here for the convenience of faculty, specifiers, and purchasing professionals.

Notify U.S. Standards Coordination Office USA WTO Enquiry Point

We include the TBT on the agenda of our Hello World! colloquium, which is open to everyone. See our CALENDAR for the next online meeting.

 



 

Artificial Intelligence Standards

U.S. Artificial Intelligence Safety Institute

ANSI Response to NIST “A Plan for Global Engagement on AI Standards”

On April 29, 2024, NIST released a draft plan for global engagement on AI standards.

Comments are due by June 2. More information is available here.

 

Request for Information Related to NIST’s Assignments

Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning Artificial Intelligence 

The National Institute of Standards and Technology seeks information to assist in carrying out several of its responsibilities under the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023. Among other things, the E.O. directs NIST to undertake an initiative for evaluating and auditing capabilities relating to Artificial Intelligence (AI) technologies and to develop a variety of guidelines, including for conducting AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.

Regulations.GOV Filing: NIST-2023-0009-0001_content

Browse Posted Comments (72 as of February 2, 2024 | 12:00 EST)

Standards Michigan Public Comment

Attention Is All You Need | Authors: Ashish Vaswani et al. (2017). This groundbreaking paper introduced the Transformer architecture, replacing recurrent layers with self-attention mechanisms to enable parallelizable, efficient sequence modeling. It laid the foundational blueprint for all subsequent large language models (LLMs), revolutionizing natural language processing by capturing long-range dependencies without sequential processing (a minimal self-attention sketch follows this list).
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Authors: Jacob Devlin et al. (2018). BERT pioneered bidirectional pre-training via masked language modeling, allowing models to understand context from both directions. As an encoder-only Transformer, it achieved state-of-the-art results on 11 NLP tasks and established the pre-training/fine-tuning paradigm that underpins bidirectional LLMs like those in search and classification.
Training Compute-Optimal Large Language Models | Authors: Jordan Hoffmann et al. (2022).  Known as the Chinchilla paper, it revealed that optimal LLM performance requires balanced scaling of model size and data volume (e.g., 70B parameters trained on 1.4T tokens outperform larger models with less data). This shifted research toward data-efficient training, influencing efficient LLM development.
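For readers who want to see the mechanism behind the Transformer paper, below is a minimal sketch of scaled dot-product self-attention as described in Vaswani et al. (2017), written in plain NumPy with toy dimensions; the array sizes and random projections are illustrative assumptions, not drawn from any NIST material.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Scaled dot-product attention (Vaswani et al., 2017).
        Q, K, V: arrays of shape (seq_len, d_k). Returns (seq_len, d_k)."""
        d_k = Q.shape[-1]
        # Similarity of every query with every key, scaled by sqrt(d_k)
        scores = Q @ K.T / np.sqrt(d_k)                     # (seq_len, seq_len)
        # Softmax over keys turns scores into attention weights
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output position mixes all value vectors, so long-range
        # dependencies are captured without any sequential recurrence.
        return weights @ V

    # Toy example: 4 tokens with 8-dimensional embeddings (illustrative sizes)
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    W_q, W_k, W_v = (rng.standard_normal((8, 8)) for _ in range(3))
    out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
    print(out.shape)  # (4, 8)

As a side note on the Chinchilla figures quoted above, 1.4 trillion tokens for a 70-billion-parameter model works out to roughly 20 training tokens per parameter, the rule of thumb commonly taken from that paper.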


Unleashing American Innovation

Federal Agency Conformity Assessment

Time & Frequency Services

Technical Requirements for Weighing & Measuring Devices

Why You Need Standards

Summer Internship Research Fellowship

A Study of Children’s Password Practices

Human Factors Using Elevators in Emergency Evacuation

Cloud Computing Paradigm

What is time?

Readings / Radio Controlled Clocks

Standard Reference Material


Metrology is the scientific discipline that deals with measurement, including both the theoretical and practical aspects of measurement. It is a broad field that encompasses many different areas, including length, mass, time, temperature, and electrical and optical measurements.  The goal of metrology is to establish a system of measurement that is accurate, reliable, and consistent. This involves the development of standards and calibration methods that enable precise and traceable measurements to be made.

The International System of Units (SI) is the most widely used system of units today and is based on a set of seven base units, which are defined in terms of fixed values of physical constants.

Another important aspect of metrology is the development and use of measurement instruments and techniques. These instruments and techniques must be designed to minimize errors and uncertainties in measurements, and they must be calibrated against recognized standards to ensure accuracy and traceability.

Metrology also involves the development of statistical methods for analyzing and interpreting measurement data. These methods are used to quantify the uncertainty associated with measurement results and to determine the reliability of those results.
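As a concrete illustration of the statistical side described above, the sketch below evaluates a Type A standard uncertainty from repeated readings (standard deviation of the mean) and reports an expanded uncertainty with coverage factor k = 2, in the spirit of the GUM; the readings themselves are invented for the example.

    import statistics

    # Hypothetical repeated readings of the same measurand (e.g., a length in mm)
    readings = [10.012, 10.009, 10.011, 10.010, 10.013, 10.008]

    n = len(readings)
    mean = statistics.mean(readings)
    s = statistics.stdev(readings)   # sample standard deviation of the readings
    u_A = s / n ** 0.5               # Type A standard uncertainty of the mean
    U = 2 * u_A                      # expanded uncertainty, coverage factor k = 2 (~95 %)

    print(f"result: {mean:.4f} mm ± {U:.4f} mm (k = 2)")

Reporting the result together with its expanded uncertainty and coverage factor is what allows a measurement to be compared meaningfully with others traceable to the same standards.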

National Institute of Standards and Technology

Federal Participation in Consensus Standards

ARCHIVE: UM Welcomes ANSI 2015

Why You Need Standards

Department of Justice Antitrust Case Filings

“When we talk about standards in our personal lives, we might think about the quality we expect in things such as restaurants and first dates. But the standards that exist in science and technology have an even greater impact on our lives. Technical standards keep us safe, enable technology to advance, and help businesses succeed. They quietly make the modern world tick and prevent technological problems that you might not realize could even happen…”

Technical Requirements for Weighing & Measuring Devices

Innovation and Competitiveness in Artificial Intelligence

The International Trade Administration (ITA) of the U.S. Department of Commerce (DOC) is requesting public comments to gain insights on the current global artificial intelligence (AI) market. Responses will provide clarity about stakeholder concerns regarding international AI policies, regulations, and other measures which may impact U.S. exports of AI technologies. Additionally, the request for information (RFI) includes inquiries related to AI standards development. ANSI encourages relevant stakeholders to respond by ITA’s deadline of October 17, 2022.

Fueling U.S. Innovation and Competitiveness in AI: Respond to International Trade Administration’s Request for Information

Commerce Department Launches the National Artificial Intelligence Advisory Committee

 

AI Risk Management Framework

 

We list notable NIST projects or efforts related to LLMs, based on available information from NIST’s publications and initiatives. These projects emphasize NIST’s role in advancing measurement science, standards, and guidelines for trustworthy AI systems, including LLMs. Note that some projects are specific studies, while others are broader programs that encompass LLMs.
  • Evaluating LLMs for Real-World Vulnerability Repair in C/C++ Code
    NIST conducted a study to evaluate the capability of advanced LLMs, such as ChatGPT-4 and Claude, in repairing memory corruption vulnerabilities in real-world C/C++ code. The project curated 223 code snippets with vulnerabilities like memory leaks and buffer errors, assessing LLMs’ proficiency in generating localized fixes. This work highlights LLMs’ potential in automated code repair and identifies limitations in handling complex vulnerabilities.
  • Translating Natural Language Specifications into Access Control Policies
    This project explores the use of LLMs for automated translation and information extraction of access control policies from natural language sources. By leveraging prompt engineering techniques, NIST demonstrated improved efficiency and accuracy in converting human-readable requirements into machine-interpretable policies, advancing automation in security systems (a hedged prompt-to-policy sketch follows this list).
  • Assessing Risks and Impacts of AI (ARIA) Program
    NIST’s ARIA program evaluates the societal risks and impacts of AI systems, including LLMs, in realistic settings. The program includes a testing, evaluation, validation, and verification (TEVV) framework to understand LLM capabilities, such as controlled access to privileged information, and their broader societal effects. This initiative aims to establish guidelines for safe AI deployment.
  • AI Risk Management Framework (AI RMF)
    NIST developed the AI RMF to guide the responsible use of AI, including LLMs. This framework provides a structured approach to managing risks associated with AI systems, offering tools and benchmarks for governance, risk assessment, and operationalizing trustworthy AI across various sectors. It’s widely applied in LLM-related projects.
  • AI Standards “Zero Drafts” Pilot Project
    Launched to accelerate AI innovation, this project focuses on developing AI standards, including those relevant to LLMs, through an open and collaborative process. It aims to create flexible guidelines that evolve with LLM advancements, encouraging input from stakeholders to ensure robust standards.
  • Technical Language Processing (TLP) Tutorial
    NIST collaborated on a TLP tutorial at the 15th Annual Conference of the Prognostics and Health Management Society to foster awareness and education on processing large volumes of text using machine learning, including LLMs. The project explored how LLMs can assist in content analysis and topic modeling for research and engineering applications.
  • Evaluation of LLM Security Against Data Extraction Attacks
    NIST investigated vulnerabilities in LLMs, such as training data extraction attacks, using the example of GPT-2 (a predecessor to modern LLMs). This project, referencing techniques developed by Carlini et al., aims to understand and mitigate privacy risks in LLMs, contributing to safer model deployment.
  • Fundamental Research on AI Measurements
    As part of NIST’s AI portfolio, this project conducts fundamental research to establish scientific foundations for measuring LLM performance, risks, and interactions. It includes developing evaluation metrics, benchmarks, and standards to ensure LLMs are reliable and trustworthy in diverse applications.
  • Adversarial Machine Learning (AML) Taxonomy for LLMs
    NIST developed a taxonomy of adversarial machine learning attacks, including those targeting LLMs, such as evasion, data poisoning, privacy, and abuse attacks. This project standardizes terminology and provides guidance to enhance LLM security against malicious manipulations, benefiting both cybersecurity and AI communities.
  • Use-Inspired AI Research for LLM Applications
    NIST’s AI portfolio includes use-inspired research to advance LLM applications across government agencies and industries. This project develops guidelines and tools to operationalize LLMs responsibly, focusing on practical implementations like text summarization, translation, and question-answering systems.
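To make the access control item above more concrete, here is a hedged sketch of what prompt-based policy extraction can look like: a prompt template plus a parser that turns a JSON response into a simple rule structure. The call_llm stub, the prompt wording, and the policy fields are hypothetical illustrations for this page, not NIST's actual method or tooling.

    import json
    from dataclasses import dataclass

    @dataclass
    class AccessRule:
        subject: str   # who is granted or denied access
        action: str    # e.g., "read" or "write"
        resource: str  # what is being accessed
        effect: str    # "allow" or "deny"

    PROMPT_TEMPLATE = (
        "Extract every access control rule from the policy text below. "
        "Return a JSON list of objects with keys: subject, action, resource, effect.\n\n"
        "Policy text:\n{policy_text}"
    )

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for an LLM API call; replace with a real client."""
        # Canned response so the sketch runs end to end without network access.
        return (
            '[{"subject": "faculty", "action": "read", '
            '"resource": "lab equipment logs", "effect": "allow"}]'
        )

    def extract_rules(policy_text: str) -> list[AccessRule]:
        raw = call_llm(PROMPT_TEMPLATE.format(policy_text=policy_text))
        return [AccessRule(**item) for item in json.loads(raw)]

    rules = extract_rules("Faculty may read the lab equipment logs.")
    print(rules[0])  # AccessRule(subject='faculty', action='read', ...)

In practice the stub would be replaced by a call to whatever model is being evaluated, and the extracted rules would be validated against the source text before use.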

Remarks:

  • These projects reflect NIST’s focus on evaluating, standardizing, and securing LLMs rather than developing LLMs themselves. NIST’s role is to provide frameworks, guidelines, and evaluations to ensure trustworthy AI.
  • Some projects, like ARIA and AI RMF, are broad programs that encompass LLMs among other AI systems, but they include specific LLM-related evaluations or applications.

 
