Category Archives: @NIST


Why You Need Standards

Department of Justice Antitrust Case Filings

“When we talk about standards in our personal lives, we might think about the quality we expect in things such as restaurants and first dates. But the standards that exist in science and technology have an even greater impact on our lives. Technical standards keep us safe, enable technology to advance, and help businesses succeed. They quietly make the modern world tick and prevent technological problems that you might not realize could even happen…”

Technical Requirements for Weighing & Measuring Devices

Innovation and Competitiveness in Artificial Intelligence

The International Trade Administration (ITA) of the U.S. Department of Commerce (DOC) is requesting public comments to gain insight into the current global artificial intelligence (AI) market. Responses will provide clarity about stakeholder concerns regarding international AI policies, regulations, and other measures that may impact U.S. exports of AI technologies. Additionally, the request for information (RFI) includes inquiries related to AI standards development. ANSI encourages relevant stakeholders to respond by ITA’s deadline of October 17, 2022.

Fueling U.S. Innovation and Competitiveness in AI: Respond to International Trade Administration’s Request for Information

Commerce Department Launches the National Artificial Intelligence Advisory Committee

 

AI Risk Management Framework

 

We list notable NIST projects or efforts related to LLMs, based on available information from NIST’s publications and initiatives. These projects emphasize NIST’s role in advancing measurement science, standards, and guidelines for trustworthy AI systems, including LLMs. Note that some projects are specific studies, while others are broader programs that encompass LLMs.
  • Evaluating LLMs for Real-World Vulnerability Repair in C/C++ Code
    NIST conducted a study to evaluate the capability of advanced LLMs, such as ChatGPT-4 and Claude, in repairing memory corruption vulnerabilities in real-world C/C++ code. The project curated 223 code snippets with vulnerabilities like memory leaks and buffer errors, assessing LLMs’ proficiency in generating localized fixes. This work highlights LLMs’ potential in automated code repair and identifies limitations in handling complex vulnerabilities.
  • Translating Natural Language Specifications into Access Control Policies
    This project explores the use of LLMs for automated translation and information extraction of access control policies from natural language sources. By leveraging prompt engineering techniques, NIST demonstrated improved efficiency and accuracy in converting human-readable requirements into machine-interpretable policies, advancing automation in security systems. A minimal sketch of this kind of prompt-driven translation appears after this list.
  • Assessing Risks and Impacts of AI (ARIA) Program
    NIST’s ARIA program evaluates the societal risks and impacts of AI systems, including LLMs, in realistic settings. The program includes a testing, evaluation, validation, and verification (TEVV) framework to understand LLM capabilities, such as controlled access to privileged information, and their broader societal effects. This initiative aims to establish guidelines for safe AI deployment.
  • AI Risk Management Framework (AI RMF)
    NIST developed the AI RMF to guide the responsible use of AI, including LLMs. This framework provides a structured approach to managing risks associated with AI systems, offering tools and benchmarks for governance, risk assessment, and operationalizing trustworthy AI across various sectors. It’s widely applied in LLM-related projects.
  • AI Standards “Zero Drafts” Pilot Project
    Launched to accelerate AI innovation, this project focuses on developing AI standards, including those relevant to LLMs, through an open and collaborative process. It aims to create flexible guidelines that evolve with LLM advancements, encouraging input from stakeholders to ensure robust standards.
  • Technical Language Processing (TLP) Tutorial
    NIST collaborated on a TLP tutorial at the 15th Annual Conference of the Prognostics and Health Management Society to foster awareness and education on processing large volumes of text using machine learning, including LLMs. The project explored how LLMs can assist in content analysis and topic modeling for research and engineering applications.
  • Evaluation of LLM Security Against Data Extraction Attacks
    NIST investigated vulnerabilities in LLMs, such as training data extraction attacks, using the example of GPT-2 (a predecessor to modern LLMs). This project, referencing techniques developed by Carlini et al., aims to understand and mitigate privacy risks in LLMs, contributing to safer model deployment. A small illustration of the underlying extraction signal appears after this list.
  • Fundamental Research on AI Measurements
    As part of NIST’s AI portfolio, this project conducts fundamental research to establish scientific foundations for measuring LLM performance, risks, and interactions. It includes developing evaluation metrics, benchmarks, and standards to ensure LLMs are reliable and trustworthy in diverse applications.
  • Adversarial Machine Learning (AML) Taxonomy for LLMs
    NIST developed a taxonomy of adversarial machine learning attacks, including those targeting LLMs, such as evasion, data poisoning, privacy, and abuse attacks. This project standardizes terminology and provides guidance to enhance LLM security against malicious manipulations, benefiting both cybersecurity and AI communities.
  • Use-Inspired AI Research for LLM Applications
    NIST’s AI portfolio includes use-inspired research to advance LLM applications across government agencies and industries. This project develops guidelines and tools to operationalize LLMs responsibly, focusing on practical implementations like text summarization, translation, and question-answering systems.
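
Below is a minimal sketch of the prompt-driven policy translation described in the access control item above: a natural-language requirement is sent to a model with instructions to answer in a fixed JSON shape, and the reply is parsed into a machine-interpretable rule. The `query_llm` helper, the prompt wording, and the rule fields are illustrative assumptions, not NIST’s actual tooling.

```python
import json

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder: wire this to whichever LLM provider you use.
    It should return the model's text completion for the given prompt."""
    raise NotImplementedError("Connect this to an LLM provider of your choice.")

# Illustrative prompt template asking for a structured, machine-readable rule.
POLICY_PROMPT = """Extract one access control rule from the sentence below.
Reply with JSON only, using exactly these keys: subject, action, resource, condition.

Sentence: "{sentence}"
"""

def extract_rule(sentence: str) -> dict:
    """Ask the model for a structured rule and parse its JSON reply."""
    reply = query_llm(POLICY_PROMPT.format(sentence=sentence))
    return json.loads(reply)  # assumes the model returned bare JSON

if __name__ == "__main__":
    rule = extract_rule("Auditors may read financial reports during business hours.")
    # Expected shape (illustrative): {"subject": "auditor", "action": "read",
    #   "resource": "financial report", "condition": "business hours"}
    print(rule)
```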
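The data extraction item above can be illustrated with a similarly small sketch of the perplexity-based signal used by Carlini et al.: draw unconditioned samples from the public GPT-2 checkpoint and rank them by how confident the model is in its own output, since unusually low perplexity can indicate memorized training text. This sketch assumes the Hugging Face transformers and PyTorch libraries; it is a simplified illustration, not NIST’s evaluation code.

```python
# Simplified sketch of a training-data extraction filter: sample from GPT-2,
# then flag the generations the model is unusually confident about, since
# low perplexity is a (weak) signal of memorized training text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

def sample(n_tokens: int = 64) -> str:
    """Draw one unconditioned sample from the model."""
    start = torch.tensor([[tokenizer.bos_token_id]])
    out = model.generate(start, do_sample=True, max_length=n_tokens,
                         top_k=40, pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)

candidates = [sample() for _ in range(5)]
# The lowest-perplexity samples are the candidates to inspect for memorization.
for text in sorted(candidates, key=perplexity)[:3]:
    print(f"{perplexity(text):7.1f}  {text[:80]!r}")
```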

Remarks:

  • These projects reflect NIST’s focus on evaluating, standardizing, and securing LLMs rather than developing LLMs themselves. NIST’s role is to provide frameworks, guidelines, and evaluations to ensure trustworthy AI.
  • Some projects, like ARIA and AI RMF, are broad programs that encompass LLMs among other AI systems, but they include specific LLM-related evaluations or applications.

 

What is time?

“What then is time? If no one asks me, I know what it is.

If I wish to explain it to him who asks, I do not know.”

Saint Augustine (“Confessions” Book XI)

 

When did time zones become a thing?

Readings / Radio Controlled Clocks

Cloud Computing Paradigm

“The greatest danger in modern technology isn’t that machines will begin to think like people,
but that people will begin to think like machines.”
— Michael Gazzaniga

NIST Cloud Computing Standards Roadmap

The “next big thing” reveals itself in hindsight. Some areas of interest and potential advancement include:

  1. Edge Computing: Edge computing brings computation closer to the data source, reducing latency and bandwidth usage. It enables processing and analysis of data at or near the edge of the network, which is especially important for applications like IoT, real-time analytics, and autonomous systems.
  2. Quantum Computing: Quantum computing holds the promise of solving complex problems that are currently beyond the capabilities of classical computers. Cloud providers are exploring ways to offer quantum computing as a service, allowing users to harness the power of quantum processors.
  3. Serverless Computing: Serverless computing abstracts away server management, enabling developers to focus solely on writing code. Cloud providers offer Function as a Service (FaaS), where users pay only for the actual execution time of their code, leading to more cost-effective and scalable solutions. A minimal handler sketch appears after this list.
  4. Multi-Cloud and Hybrid Cloud: Organizations are increasingly adopting multi-cloud and hybrid cloud strategies to avoid vendor lock-in, enhance resilience, and optimize performance by distributing workloads across different cloud providers and on-premises infrastructure.
  5. Artificial Intelligence and Machine Learning: Cloud providers are integrating AI and ML capabilities into their platforms, making it easier for developers to build AI-driven applications and leverage pre-built models for various tasks.
  6. Serverless AI: The combination of serverless computing and AI allows developers to build and deploy AI models without managing the underlying infrastructure, reducing complexity and operational overhead.
  7. Extended Security and Privacy: As data privacy concerns grow, cloud providers are investing in improved security measures and privacy-enhancing technologies to protect sensitive data and ensure compliance with regulations.
  8. Containerization and Kubernetes: Containers offer a lightweight, portable way to package and deploy applications. Kubernetes, as a container orchestration tool, simplifies the management of containerized applications, enabling scalable and resilient deployments.
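
As a small illustration of the serverless model in item 3 above, the sketch below shows the typical shape of a Python function handler: the platform invokes the function once per event and bills only for execution time, with no server to provision or manage. The exact entry-point signature and event fields vary by provider; the ones here are assumptions for illustration.

```python
import json

def handler(event, context):
    """Minimal function-as-a-service entry point.

    The cloud platform calls this once per request; you pay only for the
    time the function actually runs. The "name" field is an illustrative
    assumption about the incoming event payload."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local smoke test; in production the platform supplies event and context.
    print(handler({"name": "NIST"}, None))
```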

 
