FORUM TERATEC 2024

Media Library

ENTENDU SUR LE FORUM : "Create a novel data management and storage platform for exascale computing "

The IO Software for Exascale Architecture (IO-SEA) project is part of EuroHPC and is being pursued collaboratively across Europe by partners Atos, CEA, CEITEC, ECMWF, FZJ, ICHEC, IT4I, JGU Mainz, KTH, ParTec and Seagate. It aims to create a novel data management and storage platform for exascale computing based on Hierarchical Storage Management (HSM) and on-demand provisioning of storage services.
> Read the article

ENTENDU SUR LE FORUM : "Detecting periodic I/O behavior using frequency techniques"

This project is about detecting periodic I/O behavior using frequency techniques. It is a collaboration between the Technical University of Darmstadt and the University of Bordeaux-Inria. It also is part of the DEEP SEA project as well as of the Admire project.
> Lire l'article

ENTENDU SUR LE FORUM : "Biomemory développe une solution de stockage sur ADN"

Pourquoi l’ADN ? Car les supports de stockage traditionnels – disques durs, SSD et bandes magnétiques – ne peuvent plus suivre l’explosion du nombre de données. Il est estimé que de 30% de données conservées aujourd’hui, on passera à seulement 3% à l’avenir, forçant les entreprises à faire des arbitrages de plus en plus difficiles. Ces supports de stockage sont également fragiles. Les disques durs et SSD doivent être renouvelés tous les trois à quatre ans.
> Lire l'Article

Accelerating innovation: why is it worth upgrading your IT hardware?

In today's competitive market, your company must make strategic technology investments to ensure long-term efficiency, security and competitiveness. Many companies neglect to invest in new hardware, such as servers, for fear of the upfront cost. Yet failing to upgrade and maintain servers can often lead to higher capital and operating costs. In other words, the day-to-day expenses associated with outdated hardware can quickly exceed the short-term downtime or the initial investment required for a new system.
Find out more

Hannover Messe 2023: Hewlett Packard Enterprise and Aleph Alpha demonstrate generative AI for manufacturing 

At the Hannover Messe trade fair in Germany, US group HPE presented a prototype virtual assistant powered by generative AI, capable of conversing with factory employees and helping them solve technical problems. The prototype uses language and images to communicate with humans and is based on the conversational agent developed by German start-up Aleph Alpha. This innovative solution illustrates the potential of generative AI, which learns from existing data to generate new content.
> Find out more

Nimbix Federated: a comprehensive architecture for secure, cloud-managed, multi-site supercomputing as-a-service

High-performance computing (HPC) is a complex ecosystem that is increasingly being democratized by cloud and cloud-like technologies. This increased accessibility enables engineers and scientists to consume highly advanced resources without extensive IT expertise, advancing the digital transformation of the physical world. 
> Read the article

Alain Aspect speaks about France's place in the quantum race, from research to start-ups

Professor Alain Aspect, co-winner of the 2022 Nobel Prize in Physics for his work on quantum entanglement, spoke at the Teratec Forum 2023, an event dedicated to high-performance computing. He discussed France's position in the quantum field, notably with respect to the world's major powers. What priorities for fundamental research? What strategies should be adopted to accelerate the industrialization of these technologies? Watch his talk on video.
Read the article

Generative AI, quantum, SMEs… The Teratec Forum begins its 18th edition at the Parc floral de Paris

For its 18th edition, the Teratec Forum is taking place at the Parc floral de Paris. This major gathering of digital-technology professionals – users and suppliers alike – will focus on the latest developments in the sector, led by the rise of generative AI and the anticipation of quantum computing.
Read the article

How supercomputers are trying to make their environmental impact worthwhile

Costly to manufacture for the planet and energy-hungry to run, supercomputers have an undeniable climate impact. Industry players are trying to offset it by highlighting the environmental computations they run and by seeking to maximize the energy efficiency of their processors.
Read the article

At the Teratec Forum, high-performance computing between acceleration and sovereignty

The opening of the Teratec Forum, held on 31 May and 1 June at the Parc floral de Vincennes, focused on the acceleration of high-performance computing and the growing regulation tied to the issue of sovereignty.
Read the article

Teratec Forum 2023: three exemplary contributions of high-performance computing to Earth and climate sciences

A complete digital twin of our oceans, a model capable of predicting and estimating the extent of floods, precise spatio-temporal measurements of the Earth's deformation: these are three striking uses of high-performance computing presented at the Teratec Forum on Wednesday 31 May 2023.
Read the article

Creating an AI/HPC Center of Excellence

Launching a new strategic AI or HPC program can be a daunting prospect, but the opportunities for industry, commerce and government to transform research and innovation make a compelling investment. 
> Read

Optimizing scientific computing through the convergence of numerical simulation, HPC and AI

High-performance computing (HPC) plays an essential role in artificial intelligence (AI), simulation and digital twins by providing the computing power and infrastructure needed to perform complex, data-intensive calculations.
> Read

The CCRT, a computing centre serving industry for 20 years

For twenty years now, the CEA and its partners have been sharing increasingly powerful machines to meet industrial or research needs in HPC, data processing and AI! The success of the CCRT is based on a solid, long-term relationship of trust, established on the basis of a multi-year commitment and a dynamic of exchanges (training, technological and scientific seminars).
> Read

Intel is at the forefront of developing new technologies.

 We work relentlessly to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges with the help of the five superpowers: ubiquitous computing, pervasive connectivity, cloud-to-edge infrastructure, artificial intelligence, and sensing.
> Read

#SIMULATION

How numerical simulation and artificial intelligence combine their strengths

Artificial intelligence models based on machine learning are making their way into the world of numerical simulation, enriching or accelerating computations. Manufacturers and researchers are exploring the possibilities of this promising hybridization.
> Read the article

#AI

Hybridization with AI will be part of the simulation engineer's toolkit

Marc Schoenauer, research director at Inria and artificial intelligence specialist, highlights for Industrie & Technologies the dynamism of research on hybridizing machine learning with numerical simulation and outlines its prospects.
> Read the article

#HYBRIDIZATION

How industry will boost its design methods thanks to AI-simulation hybridization

Helped by public and private research organizations, manufacturers have begun testing simulation models hybridized with artificial intelligence. The expected benefits – reduced computation time, improved accuracy – will make it possible to improve the development and manufacture of new products and equipment.
> Read the article

#DIGITAL TWIN

How the digital twin of the ITER fusion reactor is being implemented

At the initiative of engineering software publisher Bentley Systems and engineering firm Brigantium Engineering, the digital twin of the ITER project has been under development since spring 2022. This 3D representation should assist engineers in the construction of this gigantic fusion reactor, which demands flawless organization and precision.
> Read the article

#METAVERSE

Layered on top of the digital twin, the industrial metaverse promises optimal management of factory operations while taking the human factor into account. Microsoft and Kawasaki Heavy Industries, Siemens and Nvidia for BMW… the first offerings and experiments are being rolled out.
> Read the article

#START UP

Electron spin qubits are gaining momentum, and France enters the race with Siquance

Banking on microelectronics technologies, silicon qubits are positioning themselves as challengers to superconducting and trapped-ion qubits. France is entering the race with the start-up Siquance, launched on 29 November, which builds on the pioneering work of the Quantum Silicon Grenoble research group.
> Read the article

#QUANTUM

When manufacturers and start-ups explore the uses of the first quantum machines

The universal quantum computer remains a dream, but prototype machines, albeit imperfect and limited, are already available. Several manufacturers are working with the start-ups developing them to explore the applications of this breakthrough technology.
> Read the article

#DEEPTECH

Deep-tech company Pasqal signs with BMW to apply quantum computing to metal forming

French deep-tech company Pasqal announced on 11 May a partnership with BMW to put its quantum computer to work on the numerical simulation of the mechanical behavior of metal parts. The start-up will implement an algorithm from the Dutch company Qu&Co, acquired by Pasqal in January, which had won a challenge in the BMW Quantum Computer Challenge in December 2021.
> Read the article

#QUANTUM

How Intel intends to win the quantum computer battle

There has been plenty of progress in quantum computing in recent years, yet the advent of a truly transformative commercial machine still seems distant. One of the sector's most discreet players, US chipmaker Intel, is counting on 2030, and fully intends to come out the winner of a race it sees as a marathon rather than a sprint. An overview with James Clarke, director of quantum hardware at Intel.
> Read the article

#RISC-V

HPC: The European Union unlocks 270 million euros to develop the RISC-V ecosystem

The EU seems ready to get serious about developing sovereign high-performance computing capabilities. A call for projects with a budget of 270 million euros will be launched in January to foster a hardware and software ecosystem around the open RISC-V architecture.

#DIGITAL TWIN

[CES 2023] How Dassault Systèmes intends to revolutionize healthcare with its digital twins

French digital champion Dassault Systèmes used CES in Las Vegas to present its "digital twins" of the human heart and brain, intended to advance medical research and help doctors better understand and support each of their patients.
> Read the article

#AI

[L'instant tech] The manufacturers saving energy thanks to data and AI

Urged by the government to cut energy consumption by the autumn, several manufacturers have turned to artificial intelligence solutions that correlate production data with energy prices. At Sido, the AI and connected-objects trade show held in Lyon on 14 and 15 September, L'Usine Nouvelle looks back at these solutions, which can deliver energy savings of up to 10%.
> Read the article

#VIRTUAL REALITY

[L'instant tech] How holography opens the way to (true) 3D in virtual reality

The b<>com technology research institute is taking a close look at holography, a technology that can display three-dimensional objects in space and could replace the illusions used in 3D cinema and virtual reality, which are often synonymous with discomfort. Its first prototype – a virtual reality headset – demonstrates its feasibility.
> Read the article

[L'instant tech] Design, ergonomics, training… With its virtual factories, Skyreal approaches the "industrial metaverse"

[L'instant tech] Exaion, the EDF subsidiary that automates blockchain projects and aims to be low-carbon

EDF's blockchain subsidiary Exaion presented its Exaion Node platform on 12 December; the platform aims to make it easier for companies to develop projects based on blockchain technology. Exaion is counting on its low-carbon energy mix and energy-efficient protocols to limit its environmental impact.
> Read the article

#EUROPE

Atos on the lookout for solutions to build supercomputers incorporating European technologies

While it currently relies exclusively on American processors for its supercomputers, Atos is closely following European processor development initiatives. The French manufacturer could start incorporating these made-in-Europe technologies in 2023. This would address the growing sovereignty requirements in high-performance computing, from both governments and major industrial players.
> Read the article


The IO Software for Exascale Architecture (IO-SEA) project is part of EuroHPC and is being pursued collaboratively across Europe by partners Atos, CEA, CEITEC, ECMWF, FZJ, ICHEC, IT4I, JGU Mainz, KTH, ParTec and Seagate. It aims to create a novel data management and storage platform for exascale computing based on Hierarchical Storage Management (HSM) and on-demand provisioning of storage services. One of its core tools is an API for the storage system that allows user applications to do semantic tagging of the data: the Data Access and Storage Interface (DASI). Another key element is ephemeral data lifecycle management, where storage provisioning services are brought up as directed by a user-specified workflow. This is achieved using data nodes that can be summoned dynamically as needed, unlike traditional fixed endpoints. They are also compatible with a variety of compute architectures (many-core CPU, CPU and GPU, EPI...), and multiple data nodes can serve independent compute nodes.

These nodes work with multi-tiered storage that goes from NVMe devices to SSD and HDD, all the way to tapes for longer-term storage. While the lower tiers are slower, they are also cheaper and have a larger capacity. The goal of the IO-SEA project is to integrate them in a way that is performant and seamless for the user, which is where hierarchical storage management comes into play. Using an HSM solution for object stores makes it possible to use NVMe devices, SSDs, HDDs and tapes inside the same storage tier, and therefore to complete the entire data lifecycle inside the same system. That is because these tiers are abstractions of a physical storage system that mix multiple storage types, much like an object store is an abstraction that facilitates the storing and retrieval of unstructured data. The tiers are characterized by properties such as bandwidth, capacity and reliability, which are bundled as metadata into the HSM object as it is passed around the system. This way, pieces of the data can exist on different tiers as needed by the system.

A key component of the IO-SEA system is Hestia, a library that allows the migration of data between multiple object stores in different network locations via a distributed copy tool. Two types of object stores are supported. The first is Motr, developed by Seagate: a distributed object store targeted and built for exascale applications, which uses NVRAM, SSD and HDD. The second is Phobos, developed by the CEA, which is used for tapes and is also relevant at exascale. The challenge lies in getting these two store types to talk to each other in a way that the rest of the system does not have to care about. The goal is to have not just data stores, but also policy engines that act when moving data and receiving user instructions. This is complemented by instrumentation tools that monitor I/O behavior to provide feedback and model data workflows.

At this point, the IO-SEA project is two years old, with one year to go before the first meaningful release. The teams are finishing up each component and getting them all to work together. One of the main focuses is deploying the system on a prototype exascale cluster to test it. A few real applications are being used with it, such as weather modeling and quantum chromodynamics. It is all open source, so interested parties are welcome to take a look at the GitHub repositories.
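
To make the tiering idea concrete, here is a minimal, hypothetical Python sketch of HSM-style placement driven by semantic tags and per-tier metadata. The class and function names (Tier, HsmObject, pick_tier, migrate) and the "temperature" tag are illustrative assumptions for this article only; they are not the actual DASI or Hestia APIs.

    # Hypothetical sketch of HSM-style tiering as described above.
    # Names and the "temperature" tag are illustrative only, not real DASI/Hestia APIs.
    from dataclasses import dataclass, field

    @dataclass
    class Tier:
        name: str              # e.g. "nvme", "ssd", "hdd", "tape"
        bandwidth_gbps: float  # higher = hotter tier
        capacity_tb: float

    @dataclass
    class HsmObject:
        key: str
        size_gb: float
        tags: dict = field(default_factory=dict)  # semantic tags attached by the application
        tier: Tier | None = None                  # requires Python 3.10+

    TIERS = [
        Tier("nvme", 25.0, 30),
        Tier("ssd", 6.0, 200),
        Tier("hdd", 1.0, 2000),
        Tier("tape", 0.3, 50000),
    ]

    def pick_tier(obj: HsmObject) -> Tier:
        """Naive policy: hot data on the fastest tier, archives on tape, the rest on HDD."""
        temperature = obj.tags.get("temperature", "warm")
        if temperature == "hot":
            return TIERS[0]
        if temperature == "archive":
            return TIERS[-1]
        return TIERS[2]

    def migrate(obj: HsmObject) -> None:
        """Move the object to the tier selected by the policy, if it is not already there."""
        target = pick_tier(obj)
        if obj.tier is not target:
            origin = obj.tier.name if obj.tier else "unplaced"
            print(f"moving {obj.key} ({obj.size_gb} GB) from {origin} to {target.name}")
            obj.tier = target

    if __name__ == "__main__":
        checkpoint = HsmObject("run42/checkpoint.h5", 512,
                               tags={"temperature": "hot", "experiment": "weather"})
        migrate(checkpoint)                      # lands on NVMe
        checkpoint.tags["temperature"] = "archive"
        migrate(checkpoint)                      # migrates to tape

In a real deployment this placement decision would be made by the policy engines mentioned above, with the tier metadata (bandwidth, capacity, reliability) carried alongside the HSM object rather than hard-coded.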


This project is about detecting periodic I/O behavior using frequency techniques. It is a collaboration between the Technical University of Darmstadt and the University of Bordeaux and Inria, and it is also part of the DEEP-SEA and ADMIRE projects.

Applications tend to alternate between compute and I/O phases. While compute phases are usually allocated to a user exclusively, I/O is a shared resource and suffers from problems such as variability, contention and low utilization. The project demonstrates an approach that could decrease I/O contention as a step towards exascale systems. By knowing the temporal I/O behavior of the system, you can estimate how many resources your application requires and for how long, as well as whether it is acceptable to share them with others. I/O tends to be periodic in HPC (periodicity being the time between the starts of two consecutive I/O phases), and this can be used for contention-avoidance strategies. This helps enhance system throughput, for example by improving I/O scheduling, burst-buffer management, I/O-aware batch scheduling and so on. In practice, however, it is difficult to precisely define the time between the end of one phase and the start of another, because the thresholds you choose are application- and system-dependent. The project uses frequency techniques to better detect these thresholds.

Its approach is implemented in four steps. First, the fluctuation of the bandwidth over time is treated as a discrete signal. Second, a discrete Fourier transform (DFT) is applied to that signal. Third, the dominant frequency is found, which gives the periodicity of the I/O. Last, a confidence metric is produced that can be used to assess the accuracy of the measurement.

Using this method, not only can the periodicity be predicted, but also the presence of I/O. It can be performed offline as well as online, during the execution of a job, which is preferable as it gives more accurate results. There are also several parameters that can be tuned for optimal detection and performance. The work will be made publicly available next month for those who wish to test it. In the future, it will be integrated into the ADMIRE project, which has a monitoring framework. The next step of the project will be to examine different processing techniques, such as the wavelet transform.
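
As an illustration of the four steps above, here is a small, self-contained Python/NumPy sketch that treats a bandwidth trace as a discrete signal, applies a DFT, picks the dominant frequency and derives the period. The confidence score used here (normalized magnitude of the dominant peak) is an assumption for illustration; the article does not specify the project's actual metric.

    # Minimal sketch of frequency-based I/O periodicity detection (steps 1-4 above).
    # The confidence metric is an illustrative assumption, not the project's own.
    import numpy as np

    def detect_period(bandwidth: np.ndarray, dt: float):
        """bandwidth: sampled I/O bandwidth, one value every dt seconds.
        Returns (period_seconds, confidence in [0, 1])."""
        signal = bandwidth - bandwidth.mean()              # step 1: discrete signal, remove DC
        spectrum = np.abs(np.fft.rfft(signal))             # step 2: DFT magnitude
        freqs = np.fft.rfftfreq(len(signal), d=dt)
        peak = np.argmax(spectrum[1:]) + 1                 # step 3: dominant non-zero frequency
        period = 1.0 / freqs[peak]
        confidence = spectrum[peak] / spectrum[1:].sum()   # step 4: crude confidence score
        return period, confidence

    if __name__ == "__main__":
        # Synthetic trace: an I/O burst roughly every 30 s, sampled once per second, plus noise.
        dt, n = 1.0, 600
        t = np.arange(n) * dt
        trace = 5.0 + 50.0 * (np.sin(2 * np.pi * t / 30.0) > 0.95) + np.random.rand(n)
        period, conf = detect_period(trace, dt)
        print(f"estimated I/O period: {period:.1f} s (confidence {conf:.2f})")

Run online, the same computation would be applied to a sliding window of recent bandwidth samples so the estimate can track changes during job execution.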


Biomemory is developing a DNA-based storage solution. Why DNA? Because traditional storage media – hard drives, SSDs and magnetic tape – can no longer keep up with the explosion in data volumes. It is estimated that the share of data actually retained will fall from 30% today to just 3% in the future, forcing companies to make increasingly difficult trade-offs. These media are also fragile: hard drives and SSDs have to be replaced every three to four years.

DNA has three advantages that address these problems. Its stability, 10,000 times greater than current media. Its inertness, since it can last 50,000 years as long as it is protected from three factors: UV light, oxygen and solvents. And above all its extraordinary density: in theory, 100 grams of DNA can hold 100 zettabytes, i.e. all the data stored in the world today. In practice, that would take up the space of a few storage racks.

The problem is that conventional DNA synthesis is too complex and expensive for DNA to be used in this way. Biomemory has therefore adopted a novel approach that reuses living processes as much as possible. Other companies in the sector synthesize DNA chemically, in the same way as pharmaceutical laboratories, producing millions of very short, disorganized DNA strands (oligos) that are difficult to exploit. Biomemory relies on synthetic biology to create plasmids. These circular polymers, simpler than chromosomes, are widely used in laboratories around the world. They are already much larger than oligos to begin with, and since the raw material is sugar, it is easy to duplicate them repeatedly and then assemble them with enzymes to ultimately create huge molecules.

Biomemory's strategy is to recreate the equivalent of a hard drive in this way; it calls this patented technology DNA Drive and is the only company in the world to master it. DNA Drive makes it possible to create a "drive" of any size, up to 1,000 zettabytes if necessary. It is easy to copy, produces very few errors and can hold any type of file. Biomemory is also developing a file system optimized for DNA, which will enable compression and direct access. A first version of this DNA drive, the size of a grain of salt, was produced a few years ago. It is held at the French national archives and contains 100 billion copies of the Declaration of the Rights of Man.

Biomemory's ultimate goal is for its technology to work transparently with data centers, without them having to be adapted: everything happens inside the storage system. The company is also developing removable storage media it calls DNA Cards, which can be moved from one server to another, placed in a safe, and so on. The most notable difference from current technologies is that writing to the drive is impossible without new material. The system therefore works more like a printer, for which Biomemory has to supply the equivalent of ink cartridges.

This biological storage, with its low environmental footprint and 100% made in France, is, according to Biomemory, the future of data-center storage. The technology is still close to the laboratory stage, but the company believes it will be able to replace magnetic tape for certain uses by 2030.


In today's competitive market, your company must make strategic technology investments to ensure long-term efficiency, security and competitiveness. Many companies neglect to invest in new hardware, such as servers, for fear of the upfront cost. Yet failing to upgrade and maintain servers can often lead to higher capital and operating costs. In other words, the day-to-day expenses associated with outdated hardware can quickly exceed the short-term downtime or the initial investment required for a new system.

Companies that move to new systems are more likely to gain access to technologies such as artificial intelligence (AI) and big data analytics (BDA), enabling them to make decisions faster, gain new insights and increase productivity like never before. This means that organizations that keep running ageing systems risk falling behind their competitors in terms of innovation, as well as alienating clients and consumers who expect modern, personalized experiences.

Thanks to the high core counts and scalability of servers based on 3rd Gen AMD EPYC™ processors, companies can easily host applications such as machine learning (ML) and scientific simulations, and make the most of new capabilities in artificial intelligence (AI), high-performance computing (HPC) and data analytics.

At a time when data is more vital than ever, and security and performance are paramount, 3rd Gen AMD EPYC™ processors are a sound choice for organizations looking to upgrade outdated hardware. Not only can this help your company reduce total cost of ownership (TCO), it can also supercharge productivity and agility in a constantly evolving market. AMD's online calculators can help you compare the performance and costs of different processors, projected energy savings, CO2 emissions and much more. To access these tools and learn more about our products, visit: https://www.amd.com/en/processors/epyc-tools.

Hannover Messe 2023: Hewlett Packard Enterprise and Aleph Alpha demonstrate generative AI for manufacturing

— A live demo will demonstrate an AI assistant for an industrial robot that communicates with factory personnel using natural language and images
— The AI assistant was trained with an AI supercomputer by Hewlett Packard Enterprise (HPE) and Aleph Alpha‘s multimodal language model “Luminous”
— Generative AI can significantly increase efficiency and safety in manufacturing – prerequisites for this are aspects such as risk management and the traceability and verifiability of the AI results
— Digital sovereignty: Luminous can be trained and operated on a private HPE infrastructure at the customer’s site

Generative artificial intelligence (AI) is considered one of the defining technologies of the coming years. What use cases are there in manufacturing, and what opportunities and risks must be considered when implementing them?

A live demo shows how factory personnel can communicate with the robot in natural language and with the help of images, for example to clarify questions about installation, maintenance and operational safety. An AI assistant acts like a highly specialized service technician who supports the factory staff in solving very complex tasks.

AI assistant was trained with hundreds of pages of technical documentation

The AI assistant was trained using the industrial robot’s manual, which is several hundred pages long. An AI supercomputer from HPE and Aleph Alpha‘s multimodal language model “Luminous” were used for this.

When communicating with the AI assistant, the factory staff does not have to adhere to any predefined system or use any specific terminology. The AI assistant also responds in natural language. The dialogue with the AI assistant is also possible in several languages, regardless of the language of the manual with which it was trained. A simple example of dialogue would be: “Emergency! How can I stop the robot immediately?” Answer: “Press the emergency stop button. It's the big red button on the top right of the handheld unit.”

In addition, the exchange with the AI assistant can take place via images. Example: When calibrating the robot, an operator takes a picture of a specific calibration mark with a smartphone or tablet and asks if that is the correct calibration position.

Generative AI can increase efficiency and safety – in the factory and beyond

These capabilities can contribute significantly to the efficiency and safety of robot operations. The factory staff is not dependent on the help of a specialized service technician for many detailed questions about installation, maintenance or troubleshooting. That saves time and money. The AI assistant also supports factory personnel in complying with safety regulations – for example, by having an operator photograph the robot’s standing position and asking whether this position is safe. In the event of acute problems, the AI assistant can provide crucial information to prevent damage or production downtime.

The live demo at the Hanover Fair shows only a small part of the possibilities that generative AI opens up for manufacturing. The capabilities of the AI assistant can be extended to the entire production environment of a factory and also to the supply chain – for example by being trained with further technical documentation as well as with information on suppliers, supply agreements, legal terms and regulations, costs, or CO2 emitters. Generative AI is thus becoming a strategic tool to reduce costs, minimize risks and improve sustainability along the entire supply chain.

Success factors: risk management, digital sovereignty, traceability and verifiability

The current hype about generative AI has sparked an intense public debate about the risks of this technology – especially with regards to fake content and digital sovereignty. At Hannover Messe, HPE provides information on how companies can get these risks under control. This includes basic requirements such as a certain level of expertise in regards to data value creation and artificial intelligence, as well as the integration of AI assistants in operational security and risk management processes. Running Aleph Alpha’s Luminous language model on a local AI infrastructure at the customer’s site helps protect trade secrets and avoid cloud dependencies. And finally, Luminous offers the possibility of verifying the content it generates and tracing it back to the sources used for it. In this way, AI-generated content without suitable, trustworthy sources can be withheld directly. This is a unique feature of Luminous.

How HPE and Aleph Alpha are helping clients leverage generative AI

HPE supports both providers and users to plan, implement and operate generative AI. For example, HPE built the AI supercomputer “alpha ONE”, which Aleph Alpha uses to train and operate its Luminous language model. HPE helps users (corporate customers) to define and plan meaningful use cases, to integrate generative AI into the existing IT and process landscape and to set up and operate a local sovereign AI environment. To this end, HPE, together with Aleph Alpha, develops customized AI applications for customers.

Aleph Alpha is a German AI company and was founded in 2019 with the mission to research and develop AI enabling technology for a new era of strong AI. The team of international scientists, engineers and innovators researches, develops and implements transformative AI such as large AI language and multimodal models and operates the fastest European commercial AI data center. Aleph Alpha’s generative AI solutions can support companies and public institutions in maintaining technological independence, securing data and building trustworthy solutions with high security requirements. For more information, visit: www.aleph-alpha.com

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com

Nimbix Federated: a comprehensive architecture for secure, cloud-managed, multi-site supercomputing as-a-service

High-performance computing (HPC) is a complex ecosystem that is increasingly being democratized by cloud and cloud-like technologies. This increased accessibility enables engineers and scientists to consume highly advanced resources without extensive IT expertise, advancing the digital transformation of the physical world. Now that industry has had a taste of on-demand supercomputing, the hunger for faster, more capable systems is growing. What has become clear is that public hyperscale clouds – while incredibly capable – simply cannot match the performance and raw throughput of the world's premier supercomputers.

For industry to continue to enjoy the benefits of supercomputing, it needs increased access to top-performing supercomputers, which comes with high direct and indirect costs. Besides large capital investments (or lease commitments), users looking to leverage HPC must recruit talent, build practices and processes, and suffer extraordinarily long lead-times due to global supply chain bottlenecks that simply can't keep up with the increasing demand for advanced computing components and systems. This translates to months – or even years – of delays that severely limit ROI as it is measured by private enterprises. That's assuming a successful roll-out, which is no guarantee. Even the most expert supercomputer builders and operators suffer setbacks regularly! At its pinnacle, HPC pushes the envelope of physics so severely that failures are a fact of life, both at the deployment stage and in the ongoing operational stages. Within a few years, it's time to do it all over again, because the next generation of supercomputers is so much more compelling. Riding this giant hamster wheel is a labor of love for those skilled in the art, but presents serious challenges and major barriers to entry for pragmatic industrial users looking to get ever more complex work done faster.

Cloud computing drives a compelling bargain for many use cases, even in the high-performance space. Users have access to as much compute as they can deploy (or afford) at generally competitive on-demand rates. Unfortunately, when measured in terms of cost per job, diminishing returns on scalability mean that the numbers don't always make good business sense. Additionally, while cloud storage is cheap and plentiful, high-end storage is not. Customers pay dearly for increased input/output operations per second (IOPS) and advanced software features like parallelism. Plus, the availability of high-end cloud storage, compute or networking is far more limited than the general-purpose tiers. This means that users often must wait for resources or settle for fewer than they need. Once again, the supercomputer is worth a good look, despite its complexity and cost.

Thankfully, governments and large organizations continue to invest heavily in HPC research, with an eye toward accessibility for the private sector. For example, the European High Performance Computing Joint Undertaking (EuroHPC JU) is focusing on building a world-class supercomputing ecosystem. Elsewhere, including in the United States, large direct investment continues to flow from massive funding bodies to advance scientific research in fields such as climate, healthcare and energy. What's unique about many of these initiatives is that they require participating centers to open a portion of their capacity to industrial users. In theory, this shouldn't pose much of a problem, because HPC operators provide users with access to systems day in and day out.
In practice, however, there are major challenges, including:
  • Governance around data and geography
  • Skill gaps between industrial users and research users
  • Software licensing
  • Service-level agreements (SLAs) that private businesses expect from providers
  • Proper utilization, accounting and monitoring
  • Billing
  • Compliance with various rules and regulations for industry and government-funded research institutions

There are countless other challenges as well, each of which affects end users and operators alike. In response to this situation, Atos launched the Nimbix Supercomputing Suite in 2022 - a set of flexible, secure high-performance computing (HPC) as-a-service solutions. This as-a-service model for HPC, AI and quantum in the cloud provides access to one of the broadest HPC and supercomputing portfolios - from hardware to bare metal as-a-service to the democratization of advanced computing in the cloud across public and private data centers. It offers elastic, dedicated and federated supercomputing consumption models, but for the purposes of this publication, we will focus on federated only.

Nimbix Federated, a key pillar of the Nimbix Supercomputing Suite, helps solve the problems of cost, complexity and performance. For end users, it represents the state-of-the-art in responsive user interfaces, allowing point and click access from any device, any time, on any network. The rich catalog of ready-to-run workflows represents the most popular applications and vendors serving the engineering/simulation, life sciences and data science/AI spaces. These are the same applications users are already accustomed to running on their workstations or small clusters, tuned for maximum efficiency and delivered globally from the HyperHub™ application marketplace. For business users, the JARVICE platform that powers Nimbix Federated delivers granular accounting, cost-controls and project management tools. And finally, for operators, JARVICE allows easy integration into the global Nimbix Federated control plane, which can be located entirely within a specific geography (e.g. the European Union), and accessible ubiquitously from the public cloud. Centers looking to offer capacity via Nimbix Federated control the pricing, resource limits and “shaping,” and enjoy automatic monetization any time users consume their systems. Atos also supports the option of a private federation with its own restricted control plane, for groups of operators looking to further control access (e.g. within specific domains or countries). Operators can choose to deploy Kubernetes on compute nodes for maximum isolation and flexibility, or leverage their existing Slurm and Singularity deployments without any additional software configuration management overhead. JARVICE provides a consistent user experience across virtually any containerized infrastructure and platform.

General Architecture and Operations

Nimbix Federated was designed from the ground up for multi-cluster, multi-cloud and multi-tenant operations. The underlying platform has powered Nimbix Elastic (“The Nimbix Cloud”) for the past decade, facilitating millions of large-scale jobs for thousands of users in nearly 70 countries. The control plane is a service-oriented architecture (SOA) divided into two main parts. “Upstream”, which is limited to one per deployment, covers functions such as:
  • Web-based user interface
  • Public API
  • Business layer
  • High-level scheduling
  • License-based queuing
  • SSO

“Downstream”, which fronts each compute cluster connected to a control plane, covers functions such as:
  • Cluster-specific scheduling
  • Persistent storage
  • Compute

There is a “1 to many” relationship between the upstream and downstream components of a deployment. All control plane components - including the downstream agent - run as services on Kubernetes. This ensures global deployment capabilities (on any cloud or on-premises infrastructure), as well as fault tolerance and high availability. Operators choosing to interface with existing Slurm clusters simply need the ability to access the login node via SSH (and authorized private keys), using a single “service” user account. Two models are supported: one with a Kubernetes cluster in proximity to Slurm, and the other a “direct-to-Slurm” mode that can be driven directly from the cloud. For the former, operators must expose HTTPS to the upstream control plane, while for the latter, SSH. Similarly, when running compute directly on Kubernetes, an HTTPS port must be exposed to the control plane.

Remote Access

End users connect to Nimbix Federated over any network that can reach it. The control plane may reside in a central, continental location, while individual compute clusters may be spread across a large geography. For visualization jobs, users connect directly to the downstream(s) hosting them from their browser. The same HTTP(S) port that exposes the downstream API can proxy browser-based remote display sessions securely and with minimal latency. For example, an end user residing in Finland, connecting to the global Nimbix Federated control plane in central Europe, does not need to proxy via the control plane to access sessions remotely on a provider's downstream cluster in Finland. Users with access to multiple federated endpoints can further optimize their latency by selecting the cluster geographically nearest to them for compute and storage. The platform user interface provides a simple drop-down menu to change between federated zones.

Commercial Software Licensing

Nimbix Federated provides the ability to assign license servers to tenants and users. Some applications support on-demand licensing and do not require deployment of persistent license servers. Naturally, for free and open-source software, license servers are not a concern. ISVs are free to provide their own click-through end-user license agreements (EULAs) in the web portal when users run applications from the HyperHub™ application marketplace. The platform also supports advanced license-based queuing, with per-project maximums and floors, and fair-share scheduling of license checkouts.

Conclusion

HPC will continue to be the innovation engine for scientific and industrial breakthroughs. On our journey towards exascale, it is critical to achieve new heights in performance with unrivaled system efficiency, while democratizing access to these valuable HPC resources to wider communities. The Nimbix Federated model offers a “win-win” scenario to both HPC operators and end users looking for access to solve their complex problems.
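
To illustrate the kind of client-side zone selection described above (a user picking the geographically nearest downstream cluster), here is a small, purely hypothetical Python sketch. The endpoint URLs and the idea of probing each downstream over HTTPS are illustrative assumptions; they are not the actual Nimbix Federated or JARVICE API.

    # Hypothetical illustration of picking the lowest-latency federated zone.
    # The zone names and URLs below are placeholders, not real Nimbix/JARVICE endpoints.
    import time
    import urllib.request

    DOWNSTREAM_ZONES = {
        "eu-central": "https://downstream.eu-central.example.org/api/health",
        "eu-north":   "https://downstream.eu-north.example.org/api/health",
        "eu-west":    "https://downstream.eu-west.example.org/api/health",
    }

    def probe(url: str, timeout: float = 2.0) -> float:
        """Return the round-trip time in seconds for one HTTPS probe,
        or infinity if the zone is unreachable."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                pass
            return time.monotonic() - start
        except OSError:
            return float("inf")

    def nearest_zone(zones: dict[str, str]) -> str:
        """Probe every zone once and return the name of the fastest responder."""
        latencies = {name: probe(url) for name, url in zones.items()}
        print({name: f"{rtt * 1000:.0f} ms" for name, rtt in latencies.items()})
        return min(latencies, key=latencies.get)

    if __name__ == "__main__":
        print("selected zone:", nearest_zone(DOWNSTREAM_ZONES))

In the platform itself this choice is exposed as a drop-down of federated zones in the user interface; the sketch simply shows why a latency-based default is a reasonable starting point.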

Creating an AI/HPC Center of Excellence

Launching a new strategic AI or HPC program can be a daunting prospect, but the opportunities for industry, commerce and government to transform research and innovation make a compelling investment. 

However, these strategic systems can be difficult to deploy at scale, especially if the investment needs to be shared across multiple research domains. Building an AI/HPC Centre of Excellence helps focus resources and expertise and accelerates collaboration between different teams, maximizing the return on this strategic investment; a shared pool of people and skills also encourages cross-functional collaboration and speeds up knowledge transfer.

DDN’s AI optimized storage is a key component of at-scale AI and HPC systems, as it provides the high-performance storage necessary for AI and HPC workloads. With DDN's storage technology, research teams can achieve the extreme levels of performance required for the largest workloads, while also ensuring data reliability and availability.

Find out why DDN is the most successful vendor at delivering AI and HPC at-scale – meet us at our Teratec Booth #D06 on 31st May-1st June, or join us at this session: 

On Thursday June 1st at 14h45 Dr. James Coomer, VP Products at DDN, will be presenting on Addressing data challenges of the second wave of AI. 

Altair Article

Altair is establishing itself as a leader in scientific computing through the convergence of digital simulation, HPC and AI. 

High Performance Computing (HPC) plays a critical role in Artificial Intelligence (AI), simulation and digital twins by providing the computing power and infrastructure to perform complex, data-intensive calculations. Altair provides easy access and intuitive, efficient management of HPC resources, whether those resources are in on-premises data centres or in the cloud. Altair is the only innovation partner that can offer simulation, AI, digital twin and HPC, all converged into a single solution, customized to each customer's needs.

In the data centre and in the cloud, Altair's industry-leading HPC tools enable you to orchestrate, visualize, optimize and analyze your most demanding workloads, easily migrate to the cloud and eliminate I/O bottlenecks. Altair's workload management solutions improve productivity, optimize utilization and efficiency while simplifying the management of clusters, clouds and supercomputers - from the largest HPC workloads to millions of small, high-throughput jobs. Altair offers decades of expertise in helping organizations meet their needs for efficient access and management of HPC resources, partnering with industry leaders to help organizations navigate the complex landscape of HPC in the cloud. 

We automate and improve human decision making with High Performance Computing. We create digital twins and intelligent models leveraging the convergence of simulation, AI and HPC. We help you make informed decisions to compete in an increasingly connected world, while creating a greener, more sustainable future. For more information, visit https://altairengineering.fr/

The CCRT, a computing centre serving industry for 20 years

For twenty years now, the CEA and its partners have been sharing increasingly powerful machines to meet industrial or research needs in HPC, data processing and AI! The success of the CCRT is based on a solid, long-term relationship of trust, established on the basis of a multi-year commitment and a dynamic of exchanges (training, technological and scientific seminars). Located within the TGCC, in Essonne, the Research and Technology Computing Centre (https://www-ccrt.cea.fr/) relies on the CEA's expertise in the co-development and administration of large-scale systems to provide a robust and secure production environment, as well as innovative services (storage, remote display, virtualisation) and full user support. Today, the twenty CCRT partners have access to the Topaze machine, installed in 2021 by Atos and delivering more than 10 Pflops thanks to 864 scalar nodes (110,592 AMD Milan cores) and 75 accelerated computing nodes (300 Nvidia A100 GPUs). A storage capacity of 3 PB, provided by DDN, completes the package. The CCRT is also looking to the future with an Atos QLM 30 quantum emulator. Meet the CEA team on the forum stand and make a date for the CCRT's 20th anniversary celebration on 5 December 2023.

Intel is at the forefront of developing new semiconductor technologies and solutions

 Intel is at the forefront of developing new semiconductor technologies and solutions for an increasingly smart and connected world. We work relentlessly to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges with the help of the five superpowers: ubiquitous computing, pervasive connectivity, cloud-to-edge infrastructure, artificial intelligence, and sensing. These technologies profoundly shape how we experience the world by creating the bridge from the analog to the digital age. Together, they combine, amplify, and reinforce one another, and as they become more ubiquitous, they in turn unlock even more powerful new possibilities; innovative Intel solutions continue to drive digital transformation. At Intel, we’re helping our customers create industrial solutions that power smart factories; develop intelligent transportation systems that streamline traffic management; and leverage high-performance computing, data analytics and artificial intelligence to perform advanced research in fields like biochemistry, engineering, astrophysics, energy, and healthcare.