
Intel Labs Israel develops artificial intelligence that is accessible to everyone, backed by powerful hardware

Intel researcher Moshe Wasserblat describes how advanced innovation in artificial intelligence, cognition and natural language processing is developed in the company's labs, in cooperation with start-up companies and Israeli academia, across the entire range from basic science to the Gaudi processors developed in Israel

Moshe Wasserblat, director of the Natural Language Processing (NLP) and Deep Learning (DL) research group at Intel Labs. Intel PR photo

The field of artificial intelligence is developing rapidly, and a significant part of that development happens in Israel. We sat down with Moshe Wasserblat, a senior researcher at Intel Labs in Israel, to find out what the labs are doing to improve artificial intelligence systems. It turns out that the lab works in collaboration with researchers from industry and even academia (including publishing joint papers) to improve language models, to make them sustainable in terms of energy consumption, and to improve natural language processing solutions, in research that extends from basic science to applied science.

What exactly do researchers in Intel's research labs do? What research is being done here in Israel?

Wasserblat: "Intel Labs works on solutions in the fields of hardware, software, networking and security, as well as innovative computing models and architectures. In Israel, the Intel Labs team focuses on developing cognitive artificial intelligence and natural language processing technologies. The goal is to create solutions that will affect a wide variety of industries."

Experience realistic worlds with the LDM3D model: Intel Labs and Blockade Labs turn text descriptions into 360-degree landscapes, creating realistic RGB-D images

Joint research with Hugging Face

"One of the significant collaborations of Intel Labs Israel is with Hugging Face, a leading company in the field of artificial intelligence. Together, we significantly improved the performance of generative AI models running on Intel hardware. For example, we accelerated the StarCoder language model by more than 7x on Intel Xeon processors using innovative optimization techniques. The results of this collaboration are now available to customers, making it possible to run advanced models on Intel platforms more efficiently and accessibly. See the blog and demo."
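The article does not spell out which optimization techniques produced the Xeon speedups, but low-precision inference is a standard ingredient in this kind of work. As an illustration only (not Intel's or Hugging Face's actual pipeline), the core idea of symmetric int8 weight quantization can be sketched in plain Python:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.51, -1.27, 0.0, 0.92]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each weight is recovered to within one quantization step (the scale).
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

Storing and multiplying 8-bit integers instead of 32-bit floats shrinks memory traffic and lets CPU vector instructions process more values per cycle, which is where much of the inference speedup comes from.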

"Another field of research focuses on creating fascinating virtual reality experiences with the help of generative models. Researchers Gabriela Ben Melech and Estelle Aflalo, in collaboration with the Israeli start-up Blockade Labs, developed a technology capable of turning a user's text descriptions into spectacular 360-degree 3D environments. Using an advanced artificial intelligence model called LDM3D, the system can create realistic RGB-D images from simple text descriptions, enabling interactive experiences. More information is available on GitHub and in the demo."

"Furthermore, Intel Labs Israel has played a role in promoting artificial intelligence leadership through the development of acceleration solutions such as Intel Gaudi accelerators, which provide improved energy efficiency and faster inference compared to other solutions. We also offer optimized LLMs on Xeon processors, which demonstrate good performance and generation-to-generation acceleration.

The Cognitive AI team has improved Gaudi performance, providing 60% higher performance and 1.4x faster inference compared to the H100, and Xeon optimizations for up to 10x performance improvement.

More information can be found in a Forbes article and a blog post."

Could you discuss the importance of Intel's partnerships, such as the one with Hugging Face, in promoting artificial intelligence research?

Wasserblat: "Intel cooperates with industry leaders and start-up companies in order to accelerate the development of artificial intelligence technologies. These partnerships allow Intel to apply its expertise in hardware and software."

"Our partnership with Hugging Face is a clear example of how collaboration can promote innovation. Hugging Face's Transformers library has seen massive adoption, including through major cloud service providers such as Azure, AWS, and Google Cloud.

"The partnership between Intel Labs and Hugging Face brings innovations in Intel Xeon, Gaudi and GPU hardware, as well as Intel AI software, to the Transformers community through open-source integration. This allows developers to work smoothly and efficiently with these technologies."

In partnership with Hugging Face, Intel demonstrated more than 7x speedup in code generation using the StarCoder Large Language Model (LLM) on Intel Xeon processors.

Studies in collaboration with the Academy

Could you highlight some of the latest research publications that came out of Intel Labs Israel's collaborations with Israeli academia?

Wasserblat: "Our collaborations with leading Israeli universities have been very fruitful and have led to a number of notable publications in artificial intelligence and natural language processing. One of our recent joint works with the Hebrew University is "TangoBERT: Reducing Inference Cost by using Cascaded Architecture", with my colleagues Jonathan Mamou, Oren Pereg and Prof. Roy Schwartz. The paper, presented at AAAI 2023, investigates methods to speed up transformer-based models, which are commonly used in NLP tasks. By introducing a cascaded architecture, we significantly reduced the cost of inference (real-time execution) while maintaining high performance."
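The cascade idea can be conveyed with a small sketch: a cheap model answers first, and only inputs on which it is not confident are escalated to the expensive model. The two "models" below are stand-in functions invented for illustration, not TangoBERT's actual components:

```python
def small_model(text):
    """Cheap stand-in classifier: confident only on short inputs."""
    label = "positive" if "good" in text else "negative"
    confidence = 0.95 if len(text.split()) < 5 else 0.55
    return label, confidence

def large_model(text):
    """Expensive stand-in classifier, assumed accurate."""
    return ("positive" if "good" in text else "negative"), 1.0

def cascade(text, threshold=0.9):
    """Return the small model's answer unless its confidence is too low."""
    label, conf = small_model(text)
    if conf >= threshold:
        return label, "small"
    label, _ = large_model(text)
    return label, "large"

print(cascade("good movie"))                                   # handled by the small model
print(cascade("a long and rambling good review of the film"))  # escalated to the large model
```

If most inputs are "easy" and stop at the small model, the average inference cost drops sharply while accuracy on hard inputs is preserved by the fallback.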

"Another interesting paper, with our partners at Tel Aviv University, is "Transformer Language Models without Positional Encodings Still Learn Positional Information", co-authored by Peter Izsak, Adi Haviv, Ori Ram, Ofir Press and Omer Levy, and presented at EMNLP 2022. This work challenges the conventional wisdom regarding the role of positional encodings in transformer models, which have changed the face of NLP in recent years. The findings imply that even without explicit positional encodings, transformer models can still learn positional information, opening up new possibilities for model design and efficiency."
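One common intuition for that finding (a toy illustration, not the paper's analysis): under a causal attention mask, position i can only attend to positions 0..i, so the statistics of the attention output differ by position even with no positional encoding at all. The sketch below estimates the variance of a uniform causal average over random inputs and shows it shrinks with position:

```python
import random

def causal_mean(values, i):
    """Uniform causal attention: position i averages values[0..i]."""
    return sum(values[: i + 1]) / (i + 1)

random.seed(0)
n_trials, seq_len = 2000, 8
sums = [0.0] * seq_len
sq_sums = [0.0] * seq_len
# Estimate the variance of the attention output at each position
# over many random input sequences.
for _ in range(n_trials):
    values = [random.gauss(0, 1) for _ in range(seq_len)]
    for i in range(seq_len):
        m = causal_mean(values, i)
        sums[i] += m
        sq_sums[i] += m * m
variances = [sq / n_trials - (s / n_trials) ** 2 for s, sq in zip(sums, sq_sums)]
# The variance of a mean of k i.i.d. values is 1/k: the output statistics
# depend on position even though no positional encoding was used.
print([round(v, 2) for v in variances])
```

Because position-dependent signals like this leak through the causal mask, a model trained without explicit positional encodings still has something it can learn position from.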

"We were also present at ICML 2023 (a leading international machine learning conference), where we co-authored a paper with Prof. Aviv Tamar from the Technion, "Learning Control by Iterative Inversion". Prof. Tamar is an expert in reinforcement learning and its connections to representation learning, planning and optimization."

A large model for the Hebrew language

Wasserblat: "As part of Intel Labs Israel's advanced research in NLP and artificial intelligence, a team of researchers including Peter Izsak, Daniel Fleischer and Moshe Berchansky collaborated with the Dicta association, Mafat and the Israeli Association for Human Language Technologies on the development of DictaLM 2.0, a unique large language model specially adapted to processing the Hebrew language. The model, which is available openly and free to use, underwent extensive training on billions of words and hundreds of billions of tokens in Hebrew and English using Intel's Gaudi 2 accelerators. DictaLM 2.0 is a breakthrough in the capabilities of artificial intelligence in Hebrew and opens up new and diverse possibilities for NLP applications, from chatbots, summarization and error correction to advanced translation tools. These are just a few examples of the research that comes out of our collaborations with Israeli academia."

The depth and diversity of artificial intelligence still fall short of humans

Where do you see the next breakthrough in NLP and deep learning?

Humans still have the edge (for now) over artificial intelligence. Credit: the Science website. The image was prepared using DALL-E and is an illustration, not a scientific image

Wasserblat: "The next breakthrough in AI will include a transition from purely statistical methods to a more cognitive approach, which will bring us closer to understanding and inference on a human-like level."

"One of the limitations of AI today is its heavy reliance on statistical findings and huge amounts of data. While this approach has led to significant progress, it still falls short in capturing the depth and nuance of human cognition. The next breakthrough will be characterized by AI systems with capabilities for a deeper understanding of the world, similar to human cognition, and will enable the development of explainable and adaptive artificial intelligence.

"This breakthrough may include a combination of symbolic thinking and knowledge representation alongside statistical learning. By integrating built-in knowledge and thinking capabilities into AI models, we can create systems that not only process and produce language, but also understand the underlying concepts, relationships and implications."

"In the field of deep learning, we will continue to see the development of enormous models with tens and hundreds of trillions of parameters. These models will enable the execution and creation of complex tasks, pushing the boundaries of what AI can achieve. However, there will also be an increasing focus on developing smaller, more efficient language models that can compete with the performance of their larger counterparts in specific fields such as medicine, law, and coding. Models like Microsoft's phi-3 and Hugging Face's StarCoder-2 are examples of this trend towards specialized and compact models.

"As we move towards more cognitive and specialized AI, we can expect breakthroughs in areas such as natural language understanding, inference and creation. AI systems will become better at conducting context-dependent conversations, providing personalized recommendations and assisting with complex tasks that require a deeper understanding of the world.”

"The future of AI lies in the development of systems that can think and reason like humans while utilizing the power of statistical learning and large-scale data processing. By combining the strengths of both approaches, we can create an AI that is not only highly capable, but also explainable, adaptable and compatible with human values."

A combination of scientific and applied research

Is the research carried out at Intel Labs scientific or applied?

Wasserblat: "At Intel Labs, we perform a combination of scientific and applied research, which allows us to advance the basic understanding of AI and NLP while developing practical solutions. Our scientific research does not focus only on immediate solutions, but also on long-term issues such as modeling climate change and understanding causal relationships. For example, in research recently carried out by Raanan Yehezkel Rohekar, Yaniv Gurwicz and Shami Nisimov, a method was developed to interpret the attention mechanism in advanced machine learning models in causal terms. The method can discover causal relationships in the data automatically, without requiring many examples. We demonstrated the effectiveness of the approach in tasks such as sentiment analysis and recommendation systems, where it provided clearer explanations of how the models reach their conclusions.

"Scientific research like this helps us deepen our understanding of the underlying principles and mechanisms of AI and NLP, laying the foundations for future breakthroughs and innovations."

"On the other hand, our applied research focuses on more immediate and practical applications, often driven by the needs of our customers and partners. For example, researchers Guy Boudoukh, Ofir Zafrir and Igor Margolis developed a chatbot that can run on PCs, using Microsoft's Phi-2 model on the Intel® Core™ Ultra platform. This work, promoted by Hugging Face and presented at the Microsoft Build developer conference, demonstrates our efforts to bring AI capabilities to edge devices, making them more accessible and efficient."

"Talk to your documents and photos on a laptop" demo using the Phi3-LLaVA model

"Another example of our applied research is SetFitABSA by Ronen Lapardon, an aspect-based sentiment analysis framework developed in collaboration with Hugging Face. The framework enables detailed sentiment analysis with only a small number of labeled samples, and demonstrates performance on Intel CPUs that outperforms much larger flagship models.
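SetFit's few-shot efficiency comes from contrastive fine-tuning of a sentence-embedding model: a handful of labeled examples is expanded into many same-label (positive) and different-label (negative) pairs before a small classification head is trained. A minimal sketch of that pair-generation step, written for illustration rather than taken from the SetFit code:

```python
from itertools import combinations

def make_contrastive_pairs(examples):
    """Expand (text, label) examples into (text_a, text_b, same_label)
    pairs, as in SetFit-style contrastive fine-tuning."""
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, label_a == label_b))
    return pairs

examples = [
    ("the battery lasts forever", "positive"),
    ("great screen", "positive"),
    ("the keyboard feels cheap", "negative"),
]
pairs = make_contrastive_pairs(examples)
# 3 examples -> C(3, 2) = 3 pairs; exactly one pair shares a label.
print(sum(1 for _, _, same in pairs if same))  # 1
```

The quadratic blow-up in training pairs is what lets a few dozen labeled sentences provide enough signal to adapt the embedding space.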

"We also have projects like NeuroPrompts by Shahar Rosenman, which applies reinforcement learning to large language models for automated prompt engineering. The study, accepted to EACL 2024 and featured in IEEE Spectrum, is designed to improve the prompts humans give to generative AI systems, improving the accuracy and relevance of the generated results."
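NeuroPrompts itself trains a language model with reinforcement learning; a much simpler way to convey the idea of automated prompt engineering is a greedy search over candidate prompt modifiers against a scoring function. Both the keyword scorer and the modifier list below are invented for illustration:

```python
def score(prompt):
    """Stand-in for a learned aesthetic/relevance scorer:
    rewards a few keywords assumed to be useful."""
    keywords = {"highly detailed": 2.0, "sharp focus": 1.5, "cinematic": 1.0}
    return sum(v for k, v in keywords.items() if k in prompt)

def optimize_prompt(base, modifiers, max_added=2):
    """Greedily append the modifier that most improves the score."""
    prompt = base
    for _ in range(max_added):
        best, best_gain = None, 0.0
        for m in modifiers:
            candidate = f"{prompt}, {m}"
            gain = score(candidate) - score(prompt)
            if gain > best_gain:
                best, best_gain = candidate, gain
        if best is None:  # no modifier helps any more
            break
        prompt = best
    return prompt

modifiers = ["highly detailed", "cinematic", "watercolor", "sharp focus"]
print(optimize_prompt("a lighthouse at dusk", modifiers))
# -> "a lighthouse at dusk, highly detailed, sharp focus"
```

Replacing the toy scorer with a learned reward model, and the greedy loop with a policy trained against it, is the jump from this sketch to the RL formulation the paper uses.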

"Through scientific and applied research, we push the boundaries of AI and NLP and develop practical solutions for real-world use. This approach allows us to lead AI research and bring value to our customers and partners."

Challenges in translating academic research

What are the challenges in translating academic research into production-ready AI solutions at Intel?

Wasserblat: "One of the main challenges is bridging the knowledge gap between the academic research stage and the practical development stage within a company. In academia, researchers often focus on proofs of concept or work with limited data, while production environments require robust solutions that can handle real-world data, which is often noisy and unpredictable.

"Another challenge is the complexity of implementing these technologies in practice. Artificial intelligence systems developed in research can be very sophisticated, but adapting and operating them in production environments requires a great deal of technical expertise. The transition from a research idea to a practical solution in the field is challenging, because it requires a deep understanding of both the complex research and the requirements of the environment where the solution will be deployed. The differences between the academic world and the field make it difficult to turn research into a practical solution that works smoothly.

"Lack of data is also a significant challenge. Artificial intelligence systems rely heavily on large amounts of quality data for training. However, when deploying these solutions in production, there may be a mismatch between the training data and the data encountered in the real world system. This gap can result in significant output errors and biased decision-making, which can have serious consequences in real-world applications.”

Safe artificial intelligence

"Finally, Intel is committed to the responsible development and use of artificial intelligence at all stages of research and production. To this end, we have established strict review procedures and invested in research and collaborations. In addition, Intel Labs researches topics such as privacy, security, safety, human-AI collaboration, misinformation, sustainability, explainability and transparency. We also collaborate with academic institutions around the world to promote ethical and user-centered development of artificial intelligence," Wasserblat explains.

A huge array of climate data

Can you provide more details about the ClimateSet dataset, developed by Intel Labs Israel in collaboration with the Mila Quebec Institute for Artificial Intelligence and the University of Montreal?

Wasserblat: "Certainly. ClimateSet is a large-scale dataset of climate models developed in collaboration with the Quebec Artificial Intelligence Institute (Mila) and the University of Montreal. The primary goal of the dataset is to enable researchers and machine learning practitioners to rapidly project new climate change scenarios and build climate-focused applications. The lead researcher from Intel Labs Israel is Yaniv Gurwicz."

“The development of ClimateSet was driven by the need for diverse and high-quality data to train machine learning models for climate change research and applications. By providing a comprehensive and well-curated database, we aim to accelerate the development of AI solutions that can help us better understand, predict and mitigate the effects of climate change.”

"One of the main features of ClimateSet is the inclusion of a wide range of climate variables, such as temperature, precipitation, sea level and more. These variables are provided at different spatial and temporal resolutions, allowing researchers to model climate change at different scales and granularities. The information is taken from leading climate models and observational datasets, ensuring its reliability and accuracy."
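ClimateSet's actual on-disk schema is not reproduced here; purely as an illustration of the kind of access pattern such a dataset enables, the sketch below indexes toy data by emissions scenario and climate variable, with each entry carrying its own resolution metadata. All keys, scenario names beyond the standard SSP labels, and values are invented for the example:

```python
# Hypothetical in-memory index for a climate dataset; the schema and the
# numbers are illustrative, not ClimateSet's actual contents.
dataset = {
    ("ssp126", "temperature"): {"resolution_deg": 2.5, "freq": "monthly",
                                "series": [14.1, 14.3, 14.2]},
    ("ssp585", "temperature"): {"resolution_deg": 2.5, "freq": "monthly",
                                "series": [14.1, 14.6, 15.0]},
    ("ssp585", "precipitation"): {"resolution_deg": 2.5, "freq": "monthly",
                                  "series": [2.9, 2.7, 2.5]},
}

def scenario_mean(dataset, scenario, variable):
    """Average a variable's series for one emissions scenario."""
    entry = dataset[(scenario, variable)]
    return sum(entry["series"]) / len(entry["series"])

low = scenario_mean(dataset, "ssp126", "temperature")
high = scenario_mean(dataset, "ssp585", "temperature")
print(high - low)  # the higher-emissions scenario runs warmer in this toy data
```

Comparing the same variable across scenarios, as above, is the basic operation behind assessing the impact of different climate policies and interventions.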

"Another important aspect of ClimateSet is its ability to support rapid projection of new climate change scenarios. By utilizing this data pool, researchers and developers can explore different possible futures and assess the potential impacts of different climate policies and interventions. This enables a more proactive and informed approach to mitigating and adapting to climate change.”

"We believe that ClimateSet will serve as a valuable resource for the deep learning community, empowering researchers and entrepreneurs to create applications that can contribute to the fight against climate change."

The future of artificial intelligence technology

How are Intel's advances in NLP and deep learning shaping the future of AI technology?

Wasserblat: "Intel's advances in NLP and deep learning are shaping the future of AI technology in a number of key ways."

"On the hardware front, Intel develops dedicated processors for artificial intelligence, such as Intel Gaudi and NPU. These processors provide high performance and improved energy efficiency specifically tailored for natural language processing tasks. By offering specialized hardware solutions, Intel enables businesses and researchers to handle more complex NLP workloads while optimizing computing resources.”

"In addition to advancing the hardware, Intel is investing in the development of software solutions adapted to its AI hardware. For example, Intel® oneAPI is a unified programming model that allows developers to fully utilize the potential of Intel's AI hardware across a variety of architectures."

"In terms of software, Intel also develops open and free software libraries such as Intel® OpenVINO, which provide developers with easy-to-use tools for common NLP tasks. These libraries allow developers to write, test and deploy NLP models easily and quickly, lowering the barrier to entry for AI development and accelerating the adoption of NLP technologies."

"Furthermore, Intel Labs Israel proactively contributes to the artificial intelligence community by developing research libraries as well as efficient free models. For example, fastRAG and SetFit, two libraries developed by Intel Labs Israel, have accumulated more than 1.5 million downloads in the past year, demonstrating their popularity and usefulness in the research and development community."
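fastRAG builds retrieval-augmented generation (RAG) pipelines; the pattern it optimizes can be sketched with stand-in components. The retriever and generator below are toy functions written for illustration, not fastRAG's API:

```python
def retrieve(query, corpus, k=2):
    """Toy retriever: rank passages by word overlap with the query.
    A real pipeline would use dense embeddings or BM25."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query, passages):
    """Toy generator: in a real pipeline, an LLM answers from the context."""
    context = " ".join(passages)
    return f"Answer to '{query}' based on: {context}"

corpus = [
    "Gaudi accelerators are designed for deep learning training",
    "Haifa is a port city in northern Israel",
    "Xeon processors can run optimized LLM inference",
]
passages = retrieve("which processors run LLM inference", corpus)
print(generate("which processors run LLM inference", passages))
```

The retrieve-then-generate split is what lets a RAG system ground its answers in an external corpus instead of relying only on what the language model memorized during training.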

Intel Labs Israel released NeuralChat, an innovative model that achieved the highest ranking among open language models. The model received hundreds of reviews and views.

"Intel Labs Israel researcher Shira Guskin also contributed models like Dynamic-TinyBERT, which was downloaded about 500,000 times in March 2024 alone. These models illustrate Intel's commitment to advancing NLP research and providing practical tools for developers and researchers."

"Recently, Intel Labs Israel published an innovative model called NeuralChat 7B, which achieved the highest ranking on the Open LLM Leaderboard. The model is widely recognized, with hundreds of reviews and dozens of videos online. NeuralChat illustrates Intel's leadership in developing NLP technologies. The Cognitive AI team has also trained several multimodal LLMs based on Google's Gemma model on Gaudi and is publishing them through Hugging Face."

"The result is the advancement of research in specific NLP fields, such as content creation, document summarization and classification. By providing the research and development community with simple yet powerful tools to implement advanced research and development using Intel hardware, Intel is accelerating the pace of innovation in AI and NLP.”

One response

  1. It is strange that there is no significant AI research in local academia

     There is a strong feeling that in this decade the contribution of the "start-up nation" to the world (and the West in particular) will be smaller than in all previous decades, apart from entrepreneurs who will head companies abroad.
