AI Under the Hood Session: Integrating Large Language Models
Science & Technology
Introduction
In this AI Under the Hood session, Wade introduces the integration of large language models (LLMs) in FileMaker, making AI available for retrieving values from data in databases. He presents the key takeaways of the session: chatting with the database, understanding embedding vectors, semantic search, and the retrieval-augmented generation (RAG) server. Wade explains the architecture enhancements made to support the LLM integration, such as a new add-on app assistant, new script steps, and a handle-based approach for communicating with different LLM providers.
Key Takeaways:
- Chatting with the database: FileMaker lets users query values in a database through a familiar ChatGPT-style interface.
- Understanding embedding vectors: Embedding vectors represent text or documents as points in a continuous vector space, where similar meanings map to nearby vectors. FileMaker supports storing and retrieving these vectors in both binary and text formats.
- Semantic search: FileMaker's LLM integration enables semantic search based on the meaning of a natural-language query rather than exact keywords, allowing for more advanced search capabilities.
- Retrieval-augmented generation (RAG): Setting up the RAG server lets users ask questions and retrieve answers grounded in their own knowledge documents, rather than relying on the LLM's built-in knowledge alone.
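The two storage formats mentioned for embedding vectors can be sketched in Python. This is an illustrative stand-in, not FileMaker's actual internal representation: JSON stands in for the text format and packed 32-bit floats for the binary format.

```python
import json
import struct

# A toy embedding vector; real models produce hundreds of dimensions.
vector = [0.12, -0.53, 0.88, 0.05]

# Text format: human-readable, easy to inspect in a field.
as_text = json.dumps(vector)

# Binary format: packed 32-bit floats, more compact (4 bytes per dimension).
as_binary = struct.pack(f"<{len(vector)}f", *vector)

# Both round-trip back to (approximately) the same vector.
from_text = json.loads(as_text)
from_binary = list(struct.unpack(f"<{len(vector)}f", as_binary))
```

The trade-off is the usual one: text is easier to debug, while binary is smaller and faster to parse at scale.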
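Semantic search and the retrieval step of RAG both reduce to comparing embedding vectors. A minimal sketch, assuming hypothetical pre-computed document embeddings (the document names, vectors, and prompt wording below are invented for illustration; a real system gets vectors from an embedding model and sends the prompt to an LLM):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors by the angle between them (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical pre-computed document embeddings.
documents = {
    "invoices": [0.9, 0.1, 0.0],
    "contacts": [0.1, 0.9, 0.1],
    "schedule": [0.0, 0.2, 0.9],
}

def retrieve(query_vector, top_k=1):
    """Semantic search: rank documents by meaning, not keyword overlap."""
    ranked = sorted(documents,
                    key=lambda name: cosine_similarity(query_vector, documents[name]),
                    reverse=True)
    return ranked[:top_k]

# RAG: the retrieved context is prepended to the prompt so the model
# answers from the knowledge documents, not its training data alone.
query_vector = [0.8, 0.2, 0.1]   # embedding of the user's question
context = retrieve(query_vector)
prompt = f"Answer using these documents: {context}\nQuestion: ..."
```

Here the query vector is closest to the "invoices" document, so that document is what gets handed to the LLM as context.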
Keywords:
AI integration, Database retrieval, AI modeling, Embedding vectors, Semantic search, RAG server
FAQ:
Q: Can I use embedding vectors locally without sending data to a service?
A: Yes. Embedding vectors can be generated locally using a sentence-transformer model; by running a local sentence-transformer server, users can create embeddings without sending data to an external service.
Q: Is it possible to source data from multiple servers or multiple tables within a server?
A: Yes, FileMaker allows users to specify the tables and servers from which they want to retrieve embedding vectors. Users can access and store vectors from multiple sources.
Q: Can I set up an in-house server to test these features without relying on external LLM services?
A: Yes, users can set up their own open-source LLM servers locally to test and experiment with these features. This provides more control and allows for experimentation with different models.