The iChime community was joined this past week by not one, but four experts who shared research, experience, and demos with the group.
We kicked off the meeting with Dr. Chirag Shah, a professor at the University of Washington and an expert in machine learning, artificial intelligence, and information science. His research revolves around intelligent systems, and his content is fascinating.
A Look at Today’s AI With Dr. Shah
We started with some basics. Machine learning models fall into two categories: discriminative and generative. Most of us are familiar with the discriminative model, which is the easier of the two to understand.
Discriminative machine learning works with classifiers. Given existing data, it asks, “what is the probability that this data point belongs to class ‘x’?” It uses the data to recognize discriminating attributes, map out patterns, and establish clusters.
The generative learning model works the other way around, asking, “what is the likelihood that class ‘x’ would have generated this data point?” It can lead to the same outcome, but the approach is reversed.
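To make the two questions concrete, here is a minimal sketch in Python using made-up message-length data for two classes. The data, class names, and the simple Gaussian/threshold models are all illustrative assumptions, not anything Dr. Shah presented.

```python
import math

# Hypothetical training data: message lengths for two classes.
spam_lengths = [120, 130, 125, 140, 135]
ham_lengths = [60, 70, 65, 80, 75]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def gaussian_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

# Generative view: model P(x | class) for each class, then use Bayes'
# rule to get P(class | x). Equal priors assumed for simplicity.
def generative_posterior(x):
    p_spam = gaussian_pdf(x, mean(spam_lengths), var(spam_lengths))
    p_ham = gaussian_pdf(x, mean(ham_lengths), var(ham_lengths))
    return p_spam / (p_spam + p_ham)  # P(spam | x)

# Discriminative view: skip modeling how the data was generated and
# learn a decision boundary directly -- here just the midpoint
# between the two class means.
boundary = (mean(spam_lengths) + mean(ham_lengths)) / 2

print(generative_posterior(110) > 0.5)  # classify x=110 generatively
print(110 > boundary)                   # classify x=110 discriminatively
```

Both routes classify the same point, but the generative model answers “which class would most likely have produced this value?” while the discriminative model only learns where one class ends and the other begins.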
Today’s AI is still based on these classical forms of learning and uses existing data. However, it is able to move past that and use the data to predict outcomes, fill in the blanks, and create new data points.
An example of this is ChatGPT and other AI tools built on large language models (LLMs). These technologies analyze huge bodies of text and learn patterns within the content, such as which words follow which words within a language: “when I see these five words, the sixth word is typically this.” After recognizing all these different patterns within massive amounts of content, it can extrapolate that information to start building sentences and paragraphs that did not previously exist.
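The “which word follows which words” idea can be sketched in a few lines of Python using simple bigram counts over a tiny made-up corpus. Real LLMs learn far richer patterns over much longer contexts with neural networks, not raw counts, so treat this only as an illustration of the prediction step.

```python
from collections import Counter, defaultdict

# Tiny hypothetical corpus, already split into words.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`."""
    return follows[word].most_common(1)[0][0]

def generate(start, n=5):
    """Chain predictions to 'write' text word by word."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("sat"))  # → on
print(generate("the"))
```

Chaining those one-word predictions is how text that never appeared verbatim in the training data gets produced, which is the core intuition behind generative text models.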
Graphic and video AI generation works on a similar principle. Instead of predicting the next word, the AI predicts a portion of an image or video that is missing. Learning from a database of images, it fills in larger and larger portions until it is creating entirely new images and videos.
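As a rough sketch of that fill-in-the-blanks idea, the snippet below predicts a missing pixel from its known neighbors. Real generative image models use learned neural networks rather than a neighbor average, so this is only a toy analogy with made-up pixel values.

```python
MISSING = None  # marker for a pixel the model must predict

# Hypothetical 3x3 grayscale image with one missing pixel.
image = [
    [10, 10, 10],
    [10, MISSING, 10],
    [10, 10, 10],
]

def fill_missing(img):
    """Replace each missing pixel with the mean of its known neighbors."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if img[r][c] is MISSING:
                neighbors = [img[rr][cc]
                             for rr, cc in ((r - 1, c), (r + 1, c),
                                            (r, c - 1), (r, c + 1))
                             if 0 <= rr < h and 0 <= cc < w
                             and img[rr][cc] is not MISSING]
                out[r][c] = sum(neighbors) / len(neighbors)
    return out

print(fill_missing(image)[1][1])  # → 10.0
```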
These amazing technologies are all based on existing data. However, the outcome is something unique and new.
The Meeting Continued with Live Demonstrations
Patrick Behr demonstrated the step-by-step process of communicating with ChatGPT through its APIs, from initial steps such as securing an API key to an explanation of the various parameters and how they affect the responses. He finished his discussion with a live demonstration of interacting with ChatGPT using traditional IBM i SQL programming tools and functions. Here is a good starting point for learning more about these APIs: https://openai.com/blog/introducing-chatgpt-and-whisper-apis.
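The same steps Patrick walked through (an API key, a JSON payload, and parameters that shape the response) look roughly like this in plain Python, talking to OpenAI's chat completions endpoint over HTTP. The model name and parameter values are illustrative, and the sketch assumes your key is in an `OPENAI_API_KEY` environment variable; Patrick's actual IBM i SQL approach differs, but the request shape is the same.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-3.5-turbo",
                  temperature=0.7, max_tokens=100):
    """Assemble the JSON payload; temperature and max_tokens are two of
    the parameters that influence how the response is generated."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def ask_chatgpt(prompt):
    """Send the payload with the API key in an Authorization header."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With a valid key set, a call would look like:
# print(ask_chatgpt("Explain IBM i in one sentence."))
print(build_request("hello")["model"])
```

Raising the temperature makes responses more varied, while max_tokens caps their length, which is exactly the kind of behavior Patrick explored parameter by parameter.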
David Brown continued the meeting, showing a flow chart of a work-in-progress AI tool for IBM i, intended to aggregate data from disparate sources and perform various analyses. He walked us through four categories of analysis: classification, regression predictions, optimization, and abnormality detection. This was a very interesting discussion, and we wish David the best of luck as development on this tool continues.
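To make one of those categories concrete, here is a hedged sketch of simple abnormality (anomaly) detection using a z-score over made-up sensor readings. David's actual tool may use entirely different techniques; this just shows the flavor of the analysis.

```python
import statistics

# Hypothetical sensor readings with one obviously abnormal value.
readings = [100, 102, 98, 101, 99, 100, 250]

def anomalies(data, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(data)
    stdev = statistics.pstdev(data)
    return [x for x in data if abs(x - mean) / stdev > threshold]

print(anomalies(readings))  # → [250]
```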
The meeting wrapped up with Mike Pavlak offering a live demo of PHP code development using GitHub Copilot. Copilot offers autocomplete-style suggestions as you code, and it quickly became apparent how much time this AI tool can save in application development. To learn more about Copilot, visit https://docs.github.com/en/copilot/overview-of-github-copilot/about-github-copilot-for-individuals.
More of These Formats in the Future
The format of multiple guests in one meeting was very well received by all who attended, and we will keep an eye out for topics that lend themselves to it. A final huge thanks to all of our experts for joining us and sharing their time and wonderful knowledge.