Intel Innovation 2023 Conference Generative AI Hackathons and Challenges

Sep 21, 2023

The conference took place at the San Jose Convention Center, which proved to be a great venue in my opinion: it had spaces for the sessions, a big exhibit area, the keynote hall, and quiet corners where people could retreat if they had work to do. I drove a plug-in hybrid and was extremely happy to see that the parking garage offered an ample number of Level 2 charging stations. Here is the most comfortable keynote setup I’ve ever experienced: a small overflow stage with bean bags:

Throughout the conference, I never had to wait in overly long lines (though I should mention that it is smaller in scale than an Oracle JavaOne or Google I/O), so the organization and management were great. I also noticed how enthusiastic and helpful all the Intel employees were, and that applied to their interactions with each other, not just with the conference attendees. It gave the whole conference a very good atmosphere, and based on what I saw I’d consider Intel as a workplace.

I focused my efforts on the “Intel AI Innovation Bridge” Generative AI hackathon and side challenges such as the Gen AI Rock Star challenge. There were a few categories, and on the first day I couldn’t decide which one to target. On one hand, I wanted to experiment with multi-modal generative AI. Since my wife and I were trying to find a home for a cat in our apartment complex, I was thinking about an app that would boost animal shelter adoption rates with enhanced pet listings. Google researchers published a technique called DreamBooth, which can fine-tune a Stable Diffusion model on a specific subject (a person, an animal, or anything else) with only a few example photos. After fine-tuning, the model can generate images starring that subject in any requested scenario. I thought animal shelters could enrich each pet’s set of photographs with cute scenarios, and that this could be a great opportunity to port DreamBooth to Intel Max GPUs or Intel Habana Gaudi2 accelerators.

On the other hand, I wanted to explore knowledge chat agents for the startups I work for (SportsBoard and ThruThink). That might not be as cute as a cat or dog, but it would be much more connected to my everyday work and could potentially yield applicable results. At ThruThink I could control the export format of our help database, and I tested both HTML and markdown versions. I went with markdown because I’d seen semantic chunking work well with that format. I even contributed to an open source question_extractor project to avoid hammering embedding APIs too heavily, and I extended the supported API targets (AnyScale) and Q&A formats (PaLM 2, Azure OpenAI, …).
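Semantic chunking of markdown can be as simple as splitting along heading boundaries, so each chunk stays topically coherent before it is embedded. Here is a minimal sketch of that idea (my own illustration, not the question_extractor code), assuming ATX-style `#` headings and a rough character budget per chunk:

```python
import re

def chunk_markdown(text: str, max_chars: int = 1000) -> list[str]:
    """Split a markdown document into chunks along heading boundaries."""
    # Split right before each ATX heading line (e.g. "# Title", "## Sub").
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        if len(section) <= max_chars:
            chunks.append(section)
            continue
        # Oversized section: fall back to splitting on blank lines.
        current = ""
        for para in section.split("\n\n"):
            if current and len(current) + len(para) + 2 > max_chars:
                chunks.append(current)
                current = para
            else:
                current = f"{current}\n\n{para}" if current else para
        if current:
            chunks.append(current)
    return chunks
```

The appeal of markdown here is exactly that the heading structure survives export, so the splitter does not have to guess where one topic ends and the next begins.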

At the end of the first day, I decided to resist the DreamBooth cuteness and go with the more business-relevant LLM agent route. That left less than half a workday before the project submission deadline on the second day of the conference. So at the end of the day, I studied the example datasets we got for fine-tuning and discovered some discrepancies (for example, questions and answers were flipped in certain parts of the dataset). The takeaway: just as with any AI/ML task (or, more generally, any software project), everything starts with the data, even for generative AI. The same data issues emerge, and often a large portion of the time is spent massaging data. At the time of the conference, Intel’s cnvrg.io LLMaaS (LLM as a service) was in the testing stage. The service provided a no-code environment to fine-tune and customize agent solutions with ease. Due to the early development stage of this novel product, I could use two toy datasets to fine-tune an LLM, and I applied RAG (Retrieval Augmented Generation) with the help of a Pinecone vector database, using OpenAI’s embeddings API. I also got familiar with Gradio and similar front-end solutions and ended up using Gradio to productionize the agent. In the end, the judges were satisfied with my work and I won the LLM category.
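The RAG part boils down to three steps: embed the documents, retrieve the chunks most similar to the question, and prepend them to the LLM prompt as context. A minimal in-memory sketch of that retrieval loop (my own toy illustration; the real setup used OpenAI embeddings and a Pinecone index instead of the bag-of-words vectors and linear scan below):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term frequencies.
    Stand-in for a real embedding API such as OpenAI's."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k.
    A vector database like Pinecone does this lookup at scale."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved chunks into the prompt as grounding context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The design point is that the LLM never sees the whole knowledge base; it only sees the handful of chunks the retriever judged relevant, which is what keeps the agent's answers grounded in the help database.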

There was also a Gen AI Rock Star challenge. This was an hour-long challenge where participants were tasked with generating a poster or album cover for an imaginary rock band, along with the lyrics of an imaginary song for said band. I teamed up ad hoc with a fellow attendee, Ronald Randolph. We were provided with a Gaudi2-accelerated deep learning AWS EC2 instance equipped with Runway’s Stable Diffusion 1.5 text-to-image model and a Vicuna-7B-based LLM. One key to the win was increasing the Stable Diffusion inference steps from 10 to 50, which yielded much better images. The EC2 box was inaccessible for a while, and in the meantime we experimented with some other LLMs for the lyrics, such as ChatGPT and PaLM, and with Runway’s officially accessible Gen-1 Stable Diffusion 1.5 model. We used the LLM to come up with the band name (Nerdcore Annihilation) and the song title (Xeon Fury) as well. The Stable Diffusion model couldn’t render text on the image precisely, so we used Runway’s web-accessible edit feature to add an old-school Intel Inside logo and modify the center image of the front cover. The back cover was generated purely by the Gaudi2 instance (notice how the wafer is a pastry wafer and not a chip wafer; we left that in as a pun). We adjusted the prompt many times; prompt engineering is extremely important. I used my geek knowledge to provide desired keywords for the lyrics, such as MMX, AVX, AVX2, AVX-512, AMX, Xeon Phi, Gaudi2, Optimum Habana, and the freshly announced 288-core Xeon. We mixed the best parts of the various tries to create a hilarious final set of lyrics.

The conference was a huge blast, and I hope to be back in 2024! Of course, it’s even sweeter with some wins under my belt. It was amazing to see what other teams accomplished as well. For example, the three gentlemen in the center of the photo are working on an AI solution that predicts whether a patient will fall in a hospital by analyzing videos. Fall accidents in hospitals can result in serious injuries such as hip fractures, which take an additional, unwanted toll on already sick people. I could tell those fellow engineers have a very good chance of a great run with that extremely useful application. Their real win was not the 14900K CPUs, but the startup-specific help they’ll get from Intel to succeed.
