Category Archives: @NIST


Innovation and Competitiveness in Artificial Intelligence

The International Trade Administration (ITA) of the U.S. Department of Commerce (DOC) is requesting public comments to gain insight into the current global artificial intelligence (AI) market. Responses will provide clarity about stakeholder concerns regarding international AI policies, regulations, and other measures that may impact U.S. exports of AI technologies. Additionally, the request for information (RFI) includes inquiries related to AI standards development. ANSI encourages relevant stakeholders to respond by ITA’s deadline of October 17, 2022.

Fueling U.S. Innovation and Competitiveness in AI: Respond to International Trade Administration’s Request for Information

Commerce Department Launches the National Artificial Intelligence Advisory Committee

 

AI Risk Management Framework

 

We list notable NIST projects and efforts related to large language models (LLMs), based on available information from NIST’s publications and initiatives. These projects emphasize NIST’s role in advancing measurement science, standards, and guidelines for trustworthy AI systems, including LLMs. Note that some are specific studies, while others are broader programs that encompass LLMs.
  • Evaluating LLMs for Real-World Vulnerability Repair in C/C++ Code
    NIST conducted a study to evaluate the capability of advanced LLMs, such as ChatGPT-4 and Claude, in repairing memory corruption vulnerabilities in real-world C/C++ code. The project curated 223 code snippets with vulnerabilities like memory leaks and buffer errors, assessing LLMs’ proficiency in generating localized fixes. This work highlights LLMs’ potential in automated code repair and identifies limitations in handling complex vulnerabilities.
  • Translating Natural Language Specifications into Access Control Policies
    This project explores the use of LLMs for automated translation and information extraction of access control policies from natural language sources. By leveraging prompt engineering techniques, NIST demonstrated improved efficiency and accuracy in converting human-readable requirements into machine-interpretable policies, advancing automation in security systems.
  • Assessing Risks and Impacts of AI (ARIA) Program
    NIST’s ARIA program evaluates the societal risks and impacts of AI systems, including LLMs, in realistic settings. The program includes a testing, evaluation, validation, and verification (TEVV) framework to understand LLM capabilities, such as controlled access to privileged information, and their broader societal effects. This initiative aims to establish guidelines for safe AI deployment.
  • AI Risk Management Framework (AI RMF)
    NIST developed the AI RMF to guide the responsible use of AI, including LLMs. This framework provides a structured approach to managing risks associated with AI systems, offering tools and benchmarks for governance, risk assessment, and operationalizing trustworthy AI across various sectors. It’s widely applied in LLM-related projects.
  • AI Standards “Zero Drafts” Pilot Project
    Launched to accelerate AI innovation, this project focuses on developing AI standards, including those relevant to LLMs, through an open and collaborative process. It aims to create flexible guidelines that evolve with LLM advancements, encouraging input from stakeholders to ensure robust standards.
  • Technical Language Processing (TLP) Tutorial
    NIST collaborated on a TLP tutorial at the 15th Annual Conference of the Prognostics and Health Management Society to foster awareness and education on processing large volumes of text using machine learning, including LLMs. The project explored how LLMs can assist in content analysis and topic modeling for research and engineering applications.
  • Evaluation of LLM Security Against Data Extraction Attacks
    NIST investigated vulnerabilities in LLMs, such as training data extraction attacks, using the example of GPT-2 (a predecessor to modern LLMs). This project, referencing techniques developed by Carlini et al., aims to understand and mitigate privacy risks in LLMs, contributing to safer model deployment.
  • Fundamental Research on AI Measurements
    As part of NIST’s AI portfolio, this project conducts fundamental research to establish scientific foundations for measuring LLM performance, risks, and interactions. It includes developing evaluation metrics, benchmarks, and standards to ensure LLMs are reliable and trustworthy in diverse applications.
  • Adversarial Machine Learning (AML) Taxonomy for LLMs
    NIST developed a taxonomy of adversarial machine learning attacks, including those targeting LLMs, such as evasion, data poisoning, privacy, and abuse attacks. This project standardizes terminology and provides guidance to enhance LLM security against malicious manipulations, benefiting both cybersecurity and AI communities.
  • Use-Inspired AI Research for LLM Applications
    NIST’s AI portfolio includes use-inspired research to advance LLM applications across government agencies and industries. This project develops guidelines and tools to operationalize LLMs responsibly, focusing on practical implementations like text summarization, translation, and question-answering systems.
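To make the access-control-translation item concrete, here is a minimal sketch of the prompt-then-parse pattern for turning a natural-language rule into a (subject, action, resource) tuple. The prompt template, field names, and JSON contract below are illustrative assumptions, not NIST’s actual protocol:

```python
import json

# Hypothetical prompt template asking the model to reply with a JSON policy.
PROMPT_TEMPLATE = (
    "Extract the access control policy from the sentence below.\n"
    'Answer with JSON: {{"subject": ..., "action": ..., "resource": ...}}\n'
    "Sentence: {sentence}"
)

def build_prompt(sentence: str) -> str:
    # Fill the template with the human-readable requirement.
    return PROMPT_TEMPLATE.format(sentence=sentence)

def parse_policy(llm_output: str) -> tuple:
    # Validate that the model's reply contains the three required fields
    # before handing the tuple to a machine-interpretable policy store.
    policy = json.loads(llm_output)
    missing = {"subject", "action", "resource"} - policy.keys()
    if missing:
        raise ValueError(f"incomplete policy, missing: {missing}")
    return (policy["subject"], policy["action"], policy["resource"])
```

In a real pipeline, `build_prompt` output would be sent to the LLM and its reply passed to `parse_policy`; the validation step matters because model output is not guaranteed to be well-formed.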
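The data-extraction item relies on a simple signal from Carlini et al.: text a model has memorized from its training data tends to have unusually low perplexity under that model. A minimal sketch of the ranking step (the per-token log-probabilities would come from the target model; here they are plain inputs):

```python
import math

def perplexity(log_probs: list) -> float:
    # Perplexity of a sequence given its per-token natural-log probabilities:
    # exp of the negative mean log-likelihood.
    return math.exp(-sum(log_probs) / len(log_probs))

def rank_candidates(candidates: list) -> list:
    # candidates: list of (text, per-token log-probs from the target model).
    # Lower perplexity means the model finds the text unusually predictable,
    # which the extraction attack treats as evidence of memorization.
    return sorted(candidates, key=lambda c: perplexity(c[1]))
```

Real attacks refine this with comparisons against a second model or zlib entropy, but the core ordering step is the one above.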

Remarks:

  • These projects reflect NIST’s focus on evaluating, standardizing, and securing LLMs rather than developing LLMs themselves. NIST’s role is to provide frameworks, guidelines, and evaluations to ensure trustworthy AI.
  • Some projects, like ARIA and AI RMF, are broad programs that encompass LLMs among other AI systems, but they include specific LLM-related evaluations or applications.

 

What is time?

“What then is time? If no one asks me, I know what it is.

If I wish to explain it to him who asks, I do not know.”

Saint Augustine (“Confessions” Book XI)

 

When did time zones become a thing?

Readings / Radio Controlled Clocks

Cloud Computing Paradigm

“The greatest danger in modern technology isn’t that machines will begin to think like people,
but that people will begin to think like machines.”
— Michael Gazzaniga

NIST Cloud Computing Standards Roadmap

The “next big thing” reveals itself in hindsight.  Some areas of interest and potential advancements include:

  1. Edge Computing: Edge computing brings computation closer to the data source, reducing latency and bandwidth usage. It enables processing and analysis of data at or near the edge of the network, which is especially important for applications like IoT, real-time analytics, and autonomous systems.
  2. Quantum Computing: Quantum computing holds the promise of solving complex problems that are currently beyond the capabilities of classical computers. Cloud providers are exploring ways to offer quantum computing as a service, allowing users to harness the power of quantum processors.
  3. Serverless Computing: Serverless computing abstracts away server management, enabling developers to focus solely on writing code. Cloud providers offer Function as a Service (FaaS), where users pay only for the actual execution time of their code, leading to more cost-effective and scalable solutions.
  4. Multi-Cloud and Hybrid Cloud: Organizations are increasingly adopting multi-cloud and hybrid cloud strategies to avoid vendor lock-in, enhance resilience, and optimize performance by distributing workloads across different cloud providers and on-premises infrastructure.
  5. Artificial Intelligence and Machine Learning: Cloud providers are integrating AI and ML capabilities into their platforms, making it easier for developers to build AI-driven applications and leverage pre-built models for various tasks.
  6. Serverless AI: The combination of serverless computing and AI allows developers to build and deploy AI models without managing the underlying infrastructure, reducing complexity and operational overhead.
  7. Extended Security and Privacy: As data privacy concerns grow, cloud providers are investing in improved security measures and privacy-enhancing technologies to protect sensitive data and ensure compliance with regulations.
  8. Containerization and Kubernetes: Containers offer a lightweight, portable way to package and deploy applications. Kubernetes, as a container orchestration tool, simplifies the management of containerized applications, enabling scalable and resilient deployments.
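As a concrete illustration of the FaaS model described above, here is a minimal Lambda-style handler sketch. The `event`/`context` signature follows AWS Lambda’s Python convention; the greeting logic is purely illustrative:

```python
import json

def handler(event, context=None):
    # The platform invokes this function per request and bills only for
    # execution time; the developer manages no servers.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same function-shaped unit of deployment is what makes the "serverless AI" pattern possible: an inference call wrapped in a handler scales to zero when idle.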

 

Time & Frequency Services

The National Institute of Standards and Technology is responsible for maintaining and disseminating official time in the United States. While NIST does not have a direct role in implementing clock changes for daylight saving time, it does play an important role in ensuring that timekeeping systems across the country are accurate and consistent.

Prior to the implementation of daylight saving time, NIST issues public announcements reminding individuals and organizations to adjust their clocks accordingly. NIST also provides resources to help people synchronize their clocks, such as the time.gov website and the NIST radio station WWV.
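NIST’s Internet Time Service can also be queried programmatically. Below is a minimal SNTP-style client sketch: building the 48-byte client request and converting the server’s transmit timestamp to Unix time. The server name in the usage note is NIST’s public time service; the packet layout and epoch offset are standard NTP, but treat the sketch as illustrative rather than a production client:

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_DELTA = 2_208_988_800

def build_sntp_request() -> bytes:
    # First byte: LI = 0, Version = 4, Mode = 3 (client); rest zeroed.
    return b"\x23" + b"\x00" * 47

def parse_transmit_timestamp(packet: bytes) -> float:
    # The server's transmit timestamp occupies bytes 40-47:
    # 32-bit whole seconds plus a 32-bit binary fraction, big-endian.
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_UNIX_DELTA + fraction / 2**32
```

In use, one would send the request over UDP to `time.nist.gov` port 123, receive 48 bytes back, and pass them to `parse_transmit_timestamp` to get a Unix timestamp.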

In addition, NIST is responsible for developing and maintaining the atomic clocks that are used to define Coordinated Universal Time (UTC), the international standard for timekeeping. UTC is used as the basis for all civil time in the United States, and it is the reference time used by many systems, including the Global Positioning System (GPS) and the internet.

Overall, while NIST does not set daylight saving time policy, its accurate and consistent timekeeping infrastructure is what allows clock changes to be implemented smoothly across the country.


Time Realization and Distribution

Horologiorum

Optical Frequency Comb

Compact Chips Advance Precision Timing for Communications, Navigation and Other Applications

Shrinking Technology, Expanding Horizons: Complete Article

National Institute of Standards and Technology, Boulder, CO, USA

Igor Kudelin et al.

Department of Physics, University of Colorado Boulder, Boulder, CO, USA

Abstract: Numerous modern technologies are reliant on the low phase noise and exquisite timing stability of microwave signals. Substantial progress has been made in the field of microwave photonics, whereby low-noise microwave signals are generated by the down-conversion of ultrastable optical references using a frequency comb [1–3]. Such systems, however, are constructed with bulk or fibre optics and are difficult to further reduce in size and power consumption. In this work we address this challenge by leveraging advances in integrated photonics to demonstrate low-noise microwave generation via two-point optical frequency division [4,5]. Narrow-linewidth self-injection-locked integrated lasers [6,7] are stabilized to a miniature Fabry–Pérot cavity [8], and the frequency gap between the lasers is divided with an efficient dark soliton frequency comb [9]. The stabilized output of the microcomb is photodetected to produce a microwave signal at 20 GHz with phase noise of −96 dBc/Hz at 100 Hz offset frequency that decreases to −135 dBc/Hz at 10 kHz offset, values that are unprecedented for an integrated photonic system. All photonic components can be heterogeneously integrated on a single chip, providing a significant advance for the application of photonics to high-precision navigation, communication and timing systems.
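The optical frequency division at the heart of this result trades an optical reference’s stability for microwave phase noise: ideally, dividing a frequency by N lowers the phase noise by 20·log10(N) dB. A small illustrative calculation (the numbers below are round examples, not values from the paper):

```python
import math

def divided_phase_noise(optical_noise_dbc_hz: float,
                        f_optical_hz: float,
                        f_microwave_hz: float) -> float:
    # Ideal division by N = f_optical / f_microwave improves phase noise
    # by 20*log10(N) dB; real systems add excess noise on top of this.
    n = f_optical_hz / f_microwave_hz
    return optical_noise_dbc_hz - 20 * math.log10(n)

# Example: an optical reference near 200 THz divided down to 20 GHz
# (N = 10,000) ideally gains 80 dB of phase-noise suppression.
```

This 20·log10(N) leverage is why ultrastable optical cavities, rather than microwave oscillators, anchor the lowest-noise microwave sources.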

 

Complete Article (PDF)
