
Information & Communication Technology Cabling

June 10, 2025
mike@standardsmichigan.com

Balloting on the first stage of development of the 2023 National Electrical Code is underway now and will be completed by March 26th.  We collaborate with several experts in the IEEE who are leading voices in standards setting for the ICT infrastructure present in education communities.  The issues are many, complex, and fast-moving.  We provide transcripts and a sample of the issues that will determine the substance of the 2023 Edition.

Code Making Panel No. 3 Public Input Report

A sample of concepts in play:

Temperature limitations of Class 2 and Class 3 Cables

Fire resistive cabling systems

Multi-voltage (single junction, entry, pathway or connection) signaling control relay equipment

Listing of audio/video power-limited circuits

Code Making Panel No. 16 Public Input Report

A sample of concepts in play:

Definition of “Communication Utility”

Mechanical execution of work

Listed/Unlisted cables entering buildings

Underground communication cabling coordination with the National Electrical Safety Code

Public comment on the First Draft of the 2026 revision will be received until August 24, 2024.  We collaborate with the IEEE Education & Healthcare Facilities Committee, which hosts open colloquia four times monthly in European and American time zones.  See our CALENDAR for the next online meeting; it is open to everyone.

"One day ladies will take their computers for walks in the park and tell each other, "My little computer said such a funny thing this morning" - Alan Turing

Large Language Model Standards

June 9, 2025
mike@standardsmichigan.com

Perhaps the World Ends Here | Joy Harjo

 

The world begins at a kitchen table. No matter what, we must eat to live.
The gifts of earth are brought and prepared, set on the table.
So it has been since creation, and it will go on.
We chase chickens or dogs away from it. Babies teethe at the corners. They scrape their knees under it.
It is here that children are given instructions on what it means to be human.
We make men at it, we make women.
At this table we gossip, recall enemies and the ghosts of lovers.
Our dreams drink coffee with us as they put their arms around our children.
They laugh with us at our poor falling-down selves and as we put ourselves back together once again at the table.
This table has been a house in the rain, an umbrella in the sun.
Wars have begun and ended at this table. It is a place to hide in the shadow of terror.
A place to celebrate the terrible victory.
We have given birth on this table, and have prepared our parents for burial here.
At this table we sing with joy, with sorrow. We pray of suffering and remorse. We give thanks.
Perhaps the world will end at the kitchen table, while we are laughing and crying, eating of the last sweet bite.

 

There are many standards and benchmarks for evaluating large language models (LLMs). Some of the most commonly used include:

  1. GLUE (General Language Understanding Evaluation): GLUE is a benchmark designed to evaluate and analyze the performance of models across a diverse range of natural language understanding tasks, such as text classification, sentiment analysis, and question answering.
  2. SuperGLUE: SuperGLUE is an extension of the GLUE benchmark, featuring more difficult language understanding tasks, aiming to provide a more challenging evaluation for models.
  3. CoNLL (Conference on Computational Natural Language Learning): CoNLL has historically hosted shared tasks, including tasks related to coreference resolution, dependency parsing, and other syntactic and semantic tasks.
  4. SQuAD (Stanford Question Answering Dataset): SQuAD is a benchmark dataset for evaluating the performance of question answering systems. It consists of questions posed on a set of Wikipedia articles, where the model is tasked with providing answers based on the provided context.
  5. RACE (Reading Comprehension from Examinations): RACE is a dataset designed to evaluate reading comprehension models. It consists of English exam-style reading comprehension passages and accompanying multiple-choice questions.
  6. WMT (Workshop on Machine Translation): The WMT shared tasks focus on machine translation, providing benchmarks and evaluation metrics for assessing the quality of machine translation systems across different languages.
  7. BLEU (Bilingual Evaluation Understudy): BLEU is a metric used to evaluate the quality of machine-translated text relative to human-translated reference texts. It compares n-gram overlap between the generated translation and the reference translations (a minimal sketch follows below).
  8. ROUGE (Recall-Oriented Understudy for Gisting Evaluation): ROUGE is a set of metrics used for evaluating automatic summarization and machine translation. It measures the overlap between generated summaries or translations and reference summaries or translations.

These benchmarks and standards play a crucial role in assessing the performance and progress of large language models, helping researchers and developers understand their strengths, weaknesses, and areas for improvement.
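
To make the n-gram overlap idea behind BLEU concrete, below is a minimal sentence-level sketch in plain Python. It is our illustration, not the reference implementation; production scoring adds smoothing and supports multiple references.

from collections import Counter
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    # Unsmoothed sentence-level BLEU with a brevity penalty.
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        ref_counts = ngrams(ref, n)
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if overlap == 0 or total == 0:
            return 0.0  # any empty n-gram precision zeroes the unsmoothed score
        log_precisions.append(math.log(overlap / total))
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * math.exp(sum(log_precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat is on the mat", max_n=2), 3))  # 0.707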

Yann Lecun & Lex Fridman: Limits of LLMs

This is a new topic for us, with time only to cover the basics.  We do, however, follow language generally every month, because best practice discovery and promulgation in conceiving, designing, building, occupying and maintaining the architectural character of education settlements depends upon a common vocabulary.  The struggle to agree upon vocabulary presents an outsized challenge to the work we do.

Large language models hold significant potential for the building construction industry by streamlining various processes. They can analyze vast amounts of data to aid in architectural design, structural analysis, and project management. These models can generate detailed plans, suggest optimized construction techniques, and assist in cost estimation. Moreover, they facilitate better communication among stakeholders by providing natural language interfaces for discussing complex concepts. By harnessing the power of large language models, the construction industry can enhance efficiency, reduce errors, and ultimately deliver better-designed and more cost-effective buildings.

Join us today at the usual hour.  Use the login credentials at the upper right of our home page.

Related:

print("Python")

Standards January: Language

Standard for Large Language Model Agent Interface

 

History of the English Speaking Peoples

June 9, 2025
mike@standardsmichigan.com

Michigan Central

Because so much of what we do in standards setting is built upon a foundation of shared understanding and agreement on the meaning of words (no less so than in technical standards setting), time is well spent reflecting upon the origin of the nouns and verbs that we use every day.   Best practice cannot be discovered, much less promulgated, without understanding secured in common language.

Word Counts

2024 Alumni Awards

Cambridge: English language education in the era of generative AI

AI Risk Management Framework

June 9, 2025
mike@standardsmichigan.com

 

We list notable NIST projects or efforts related to LLMs, based on available information from NIST’s publications and initiatives. These projects emphasize NIST’s role in advancing measurement science, standards, and guidelines for trustworthy AI systems, including LLMs. Note that some projects are specific studies, while others are broader programs that encompass LLMs.
  • Evaluating LLMs for Real-World Vulnerability Repair in C/C++ Code
    NIST conducted a study to evaluate the capability of advanced LLMs, such as ChatGPT-4 and Claude, in repairing memory corruption vulnerabilities in real-world C/C++ code. The project curated 223 code snippets with vulnerabilities like memory leaks and buffer errors, assessing LLMs’ proficiency in generating localized fixes. This work highlights LLMs’ potential in automated code repair and identifies limitations in handling complex vulnerabilities.
  • Translating Natural Language Specifications into Access Control Policies
    This project explores the use of LLMs for automated translation and information extraction of access control policies from natural language sources. By leveraging prompt engineering techniques, NIST demonstrated improved efficiency and accuracy in converting human-readable requirements into machine-interpretable policies, advancing automation in security systems. (A sketch of this prompting pattern follows this list.)
  • Assessing Risks and Impacts of AI (ARIA) Program
    NIST’s ARIA program evaluates the societal risks and impacts of AI systems, including LLMs, in realistic settings. The program includes a testing, evaluation, validation, and verification (TEVV) framework to understand LLM capabilities, such as controlled access to privileged information, and their broader societal effects. This initiative aims to establish guidelines for safe AI deployment.
  • AI Risk Management Framework (AI RMF)
    NIST developed the AI RMF to guide the responsible use of AI, including LLMs. This framework provides a structured approach to managing risks associated with AI systems, offering tools and benchmarks for governance, risk assessment, and operationalizing trustworthy AI across various sectors. It’s widely applied in LLM-related projects.
  • AI Standards “Zero Drafts” Pilot Project
    Launched to accelerate AI innovation, this project focuses on developing AI standards, including those relevant to LLMs, through an open and collaborative process. It aims to create flexible guidelines that evolve with LLM advancements, encouraging input from stakeholders to ensure robust standards.
  • Technical Language Processing (TLP) Tutorial
    NIST collaborated on a TLP tutorial at the 15th Annual Conference of the Prognostics and Health Management Society to foster awareness and education on processing large volumes of text using machine learning, including LLMs. The project explored how LLMs can assist in content analysis and topic modeling for research and engineering applications.
  • Evaluation of LLM Security Against Data Extraction Attacks
    NIST investigated vulnerabilities in LLMs, such as training data extraction attacks, using the example of GPT-2 (a predecessor to modern LLMs). This project, referencing techniques developed by Carlini et al., aims to understand and mitigate privacy risks in LLMs, contributing to safer model deployment. (A sketch of the ranking idea follows this list.)
  • Fundamental Research on AI Measurements
    As part of NIST’s AI portfolio, this project conducts fundamental research to establish scientific foundations for measuring LLM performance, risks, and interactions. It includes developing evaluation metrics, benchmarks, and standards to ensure LLMs are reliable and trustworthy in diverse applications.
  • Adversarial Machine Learning (AML) Taxonomy for LLMs
    NIST developed a taxonomy of adversarial machine learning attacks, including those targeting LLMs, such as evasion, data poisoning, privacy, and abuse attacks. This project standardizes terminology and provides guidance to enhance LLM security against malicious manipulations, benefiting both cybersecurity and AI communities.
  • Use-Inspired AI Research for LLM Applications
    NIST’s AI portfolio includes use-inspired research to advance LLM applications across government agencies and industries. This project develops guidelines and tools to operationalize LLMs responsibly, focusing on practical implementations like text summarization, translation, and question-answering systems.
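
A minimal sketch of the prompting pattern behind the access-control translation project, assuming a generic chat-model client. The prompt wording and the ask_llm callable are our placeholders, not NIST artifacts.

import json

PROMPT_TEMPLATE = """Extract every access control rule from the policy text below.
Respond ONLY with a JSON list of objects with keys:
"subject", "action", "resource", "condition".

Policy text:
{text}
"""

def extract_rules(policy_text, ask_llm):
    # ask_llm: any callable that sends a prompt string to a chat model and
    # returns its text reply; the provider is deliberately left abstract.
    reply = ask_llm(PROMPT_TEMPLATE.format(text=policy_text))
    return json.loads(reply)  # fails loudly if the model strays from JSON

# A canned reply stands in for a real model here:
fake_llm = lambda prompt: ('[{"subject": "faculty", "action": "read", '
                           '"resource": "gradebook", "condition": "own courses"}]')
print(extract_rules("Faculty may read gradebooks for their own courses.", fake_llm))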

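And a compressed sketch of the ranking signal popularized by Carlini et al.: text to which GPT-2 assigns unusually low perplexity relative to its zlib-compressed size is a candidate for memorized training data. This assumes the third-party torch and transformers packages and illustrates the idea only; it is not NIST's test harness.

import math
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def mean_token_loss(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()  # mean token cross-entropy

def extraction_score(text):
    # High zlib entropy but low model loss suggests memorization.
    return len(zlib.compress(text.encode())) / max(mean_token_loss(text), 1e-6)
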
Remarks:

  • These projects reflect NIST’s focus on evaluating, standardizing, and securing LLMs rather than developing LLMs themselves. NIST’s role is to provide frameworks, guidelines, and evaluations to ensure trustworthy AI.
  • Some projects, like ARIA and AI RMF, are broad programs that encompass LLMs among other AI systems, but they include specific LLM-related evaluations or applications.

 

LLM Model Evaluation & Agent Interface

June 9, 2025
mike@standardsmichigan.com

IEEE sponsors two projects that follow ANSI standardization requirements:

Title: IEEE P3119 – Standard for the Procurement of Artificial Intelligence and Automated Decision Systems

Scope: The IEEE P3119 standard establishes a uniform set of definitions and a process model for procuring Artificial Intelligence (AI) and Automated Decision Systems (ADS). It covers government procurement, in-house development, and hybrid public-private development of AI/ADS. The standard redefines traditional procurement stages—problem definition, planning, solicitation, critical evaluation (e.g., impact assessments), and contract execution—using an IEEE Ethically Aligned Design (EAD) foundation and a participatory approach to address socio-technical and responsible innovation considerations. It focuses on mitigating unique AI risks compared to traditional technologies and applies to commercial AI products and services procured through formal contracts.

Purpose: The purpose of IEEE P3119 is to help government entities, policymakers, and technologists make transparent, accountable, and responsible choices in procuring AI/ADS. It provides a framework to strengthen procurement processes, ensuring due diligence, transparency about risks, and alignment with public interest. The standard aims to minimize AI-related risks (e.g., bias, ethical concerns) while maximizing benefits, complementing existing procurement practices and shaping the market for responsible AI solutions. It supports agencies in critically evaluating AI tools, assessing vendor transparency, and integrating ethical considerations into procurement.

Developmental Timelines:

    • September 23, 2021: The IEEE Standards Association (SA) Standards Board approved the project and established the IEEE P3119 Working Group. The Project Authorization Request (PAR) was created to define the scope.
    • 2021–Ongoing: Development continues, with no final publication date confirmed in available sources. As of July 18, 2024, the standard was still in progress, focusing on detailed process recommendations.
    • The standard is being developed as a voluntary socio-technical standard, with plans to test it against existing regulations (e.g., via regulatory sandboxes).

By Whom:

    • Working Group Chair: Gisele Waters, Ph.D., Director of Service Development and Operations at Design Run Group, co-founder of the AI Procurement Lab, and a human-centered design researcher focused on risk mitigation for vulnerable populations.
    • Working Group Vice Chair: Cari Miller, co-founder of the AI Procurement Lab and the Center for Inclusive Change, an AI governance leader and risk expert.
    • IEEE P3119 Working Group: Comprises a global network of IEEE SA volunteers from diverse industries, collaborating to develop standards addressing market needs and societal benefits. The group integrates expertise from government workers, policymakers, and technologists.
    • Inspiration: The standard was inspired by the AI and Procurement: A Primer report from the New York University Center for Responsible AI.

The IEEE P3119 standard is a collaborative effort to address the unique challenges of AI procurement, emphasizing ethical and responsible innovation for public benefit.

Title: IEEE P3120 – Standard for Quantum Computing Architecture

Scope: The IEEE P3120 standard defines a general architecture for quantum computers, focusing on the structure and organization of quantum computing systems. It covers the overall system architecture, including quantum hardware components (e.g., qubits, quantum gates), control systems, interfaces with classical computing systems, and software layers for programming and operation. The standard aims to provide a framework for designing interoperable and scalable quantum computing systems, addressing both hardware and software considerations for quantum and hybrid quantum-classical architectures.

Purpose: The purpose of IEEE P3120 is to establish a standardized framework to guide the design, development, and integration of quantum computing systems. It seeks to ensure consistency, interoperability, and scalability across quantum computing platforms, facilitating innovation and collaboration in the quantum computing ecosystem. By providing clear architectural guidelines, the standard supports developers, researchers, and industry stakeholders in building reliable and efficient quantum computers, bridging the gap between theoretical quantum computing and practical implementation.

Developmental Timelines:

    • September 21, 2023: The IEEE Standards Association (SA) Standards Board approved the Project Authorization Request (PAR) for P3120, initiating the project under the IEEE Computer Society’s Microprocessor Standards Committee (C/MSC).
    • 2023–Ongoing: Development is in progress, with no confirmed publication date in available sources. As a standards development project, it involves iterative drafting, review, and consensus-building, typical of IEEE processes, which can span several years.
    • The standard is being developed as a voluntary standard, with potential for testing and refinement through industry and academic collaboration.

By Whom:

    • Sponsor: IEEE Computer Society, specifically the Microprocessor Standards Committee (C/MSC), which oversees standards related to microprocessor and computing architectures.
    • Working Group: The IEEE P3120 Working Group consists of volunteers from academia, industry, and research institutions with expertise in quantum computing, computer architecture, and related fields. Specific chairs or members are not detailed in available sources, but IEEE SA working groups typically include global experts from relevant domains.
    • Stakeholders: The development involves contributions from quantum computing researchers, hardware manufacturers, software developers, and standardization experts to ensure a comprehensive and practical standard.

The IEEE P3120 standard is a critical step toward formalizing quantum computing architectures, aiming to support the growing quantum technology industry with a robust and interoperable framework.

 

Update Status

June 9, 2025
mike@standardsmichigan.com

We continue sorting through anomalies with GoDaddy Tech Support to resolve Standards Michigan's requirement for frequent and timely updates across all of our platforms.  The problem apparently lies with legacy plug-ins and widgets not yet caught up with the most recent WordPress release, 6.8.1, dated May 7th.  Our normal course of business is not affected, but some of the visual features will look a bit janky until we get it fixed.  To wit:

The Weblizar slider plug-in we have used for 10+ years seems to have fallen off the beaten path.  Our frequent visitors and clients will notice the ugly black background and small text.  We hope we can continue working with Weblizar if they can restore the customization feature of their plug-in.

  • Posts are not updating across all platforms — particularly on X on iPhones.  This is usually a caching problem, and one we have seen before (see the diagnostic sketch after this list).
  • Some images will not center.
  • Footer and right-side widgets not loading properly.
  • Some, not all, slider images are not loading at high resolution.
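
For colleagues chasing similar symptoms, here is a quick diagnostic sketch in Python (it assumes the third-party requests package): comparing cache headers and a content hash across fetches shows whether a stale copy is being served.

import hashlib
import requests

def cache_report(url):
    # Ask intermediaries to revalidate, then report what actually came back.
    r = requests.get(url, headers={"Cache-Control": "no-cache"})
    print("Status:        ", r.status_code)
    print("Cache-Control: ", r.headers.get("Cache-Control", "<none>"))
    print("Last-Modified: ", r.headers.get("Last-Modified", "<none>"))
    print("Content hash:  ", hashlib.sha256(r.content).hexdigest()[:16])

cache_report("https://standardsmichigan.com")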

The good news is that all our content, including media, survived the WordPress upgrade.  The next step in our “GoDaddy Journey” will be another PHP upgrade.  There will likely be surprises but none that we cannot handle.

In any case, timeliness and normal content flow have not been interrupted.   Much like the hardware in ICT, software must also be maintained.

This page will be posted to our X feed, @StandardsMich, to remind our colleagues and followers that software needs to be “maintained.”

Places of Worship

June 8, 2025
mike@standardsmichigan.com

“The Church is not a gallery for the exhibition of eminent Christians,

but a school for the education of imperfect ones.”

— Henry Ward Beecher

WEBCAST Committee Action Hearings, Group A #2

 

2024 International Building Code: Chapter 3 Occupancy Classification and Use

In the International Code Council catalog of best practice literature we find the first principles for safety in places of worship in the following sections of the International Building Code (IBC):

Section 303 Assembly Group A

“303.1.4:  Accessory religious educational rooms and religious auditoriums with occupant loads less than 100 per room or space are not considered separate occupancies.”   This informs how fire protection systems are designed.

Section 305 Educational Group E

“305.2.1: Rooms and spaces within places of religious worship providing such day care during religious functions shall be classified as part of the primary occupancy.”  This group includes buildings and structures, or portions thereof, occupied by more than five children older than 2-1/2 years of age who receive educational, supervision or personal care services for fewer than 24 hours per day.

Section 308 Institutional Group I

“308.5.2: Rooms and spaces within places of religious worship providing [Group I-4 Day Care Facilities] during religious functions shall be classified as part of the primary occupancy.”  Group I-4 includes buildings and structures occupied by more than five persons of any age who receive custodial care for fewer than 24 hours per day by persons other than parents or guardians, relatives by blood, marriage or adoption, and in a place other than the home of the person cared for.

Tricky stuff — and we haven’t even included the conditions under which university-affiliated places of worship may be expected to be used as community storm shelters.
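
To make the quoted thresholds concrete, here is a toy sketch in Python. It is ours, not ICC's, and encodes only the three criteria quoted above; real classification weighs many more conditions.

def classify_worship_space(occupant_load,
                           accessory_religious_education,
                           day_care_during_services):
    # Illustrative only; not a substitute for IBC Chapter 3.
    if accessory_religious_education and occupant_load < 100:
        return "Not a separate occupancy (IBC 303.1.4, Assembly Group A)"
    if day_care_during_services:
        return "Part of the primary occupancy (IBC 305.2.1 / 308.5.2)"
    return "Classify separately per IBC Chapter 3"

print(classify_worship_space(80, True, False))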

"This We'll Defend."

2024/2025/2026 ICC CODE DEVELOPMENT SCHEDULE

Public response to Committee Actions taken in Orlando in April will be received until July 8th.

Because standards development tends to be a backward-looking domain, it is enlightening to understand the concepts in play in previous editions.  The complete monograph of proposals for new building safety concepts for places of worship for the current revision cycle is linked below:

 2021/2022 Code Development: Group B

A simple search on the word “worship” will reveal what ideas are in play.  With the Group B Public Comment Hearings now complete, ICC-administered committees are curating the results for the Online Governmental Consensus Vote, the milestone in the ICC process that was completed December 6th.   Status reports are linked below:

2018/2019 Code Development: Group B

Note that several proposals that passed the governmental vote are being challenged by stakeholders in a follow-on appeals process:

2019 Group B Appeals

A quick review of the appeals statements reveals some concern over process, administration, and technical matters, but none of them directly affects how leading practice for places of worship is asserted.

We are happy to get down in the weeds with facility professionals on technical issues regarding other occupancy classes that are present in educational communities.   See our CALENDAR for the next Construction (Ædificare) colloquium; it is open to everyone.

Issue: [17-353]

Category: Chapels

Colleagues: Mike Anthony, Jack Janveja, Richard Robben, Larry Spielvogel



Fashion Technology

June 6, 2025
mike@standardsmichigan.com

Art presents a different way of looking at things than science; 

one which preserves the mystery of things without undoing the mystery.

Sir Roger Scruton


Garment Industry Standards

Gallery: School Uniforms

Textiles

Art, Design & Fashion Studios
