User Manual for the AI Assistants Module for Dolibarr
This module, available for purchase on Dolistore, lets you use many generative AI models directly from your own Dolibarr installation, paying only for what you actually consume through the APIs of the different providers, instead of subscribing to a fixed monthly plan for each employee in your company.
Index
If you are not an administrative user of your Dolibarr, you can skip to point 3 of this manual.
1. Installation Process
First, as usual for installing any Dolibarr module:
- Go to the Configuration section
- Go to Modules
- Go to the Install an external module tab and upload the module's ZIP file (or its update)
- Return to the Available modules tab
- Filter by Origin: "External - IMASDEWEB"
- Activate the module
- Click on the Configuration icon
In the first section, Miscellaneous, you can configure settings such as the "Default model". This is the model a new chat will start with when the system has no record, in the user's session, of the last model they used.
Usage Limit per User
You can configure the usage limit per user, both per hour and per day:
- Every time a user makes an AI request, the system counts how many requests they have made in the last 60 minutes and in the last 24 hours. If either count exceeds the corresponding limit, the user sees a message inviting them to try again later.
- Depending on the size of your organization and the kind of use your users/employees will make of AI, you will want to set these two parameters accordingly.
For example, you may want to let a user make very intensive use of AI (generating texts, images, etc.) for a few hours. You can then set a high hourly limit (for example, one request every 10 seconds is 6 per minute, or 360 per hour). But since you don't want this to encourage excessive use, you could cap the total at only 1400 requests every 24 hours. This way, you allow bursts of intensive AI use while keeping some control over prolonged abuse.
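The double check described above can be sketched as a sliding-window counter. This is a minimal Python illustration, not the module's actual (PHP) implementation; the class name and default limits are assumptions taken from the example figures:

```python
from collections import deque
import time

class SlidingWindowLimiter:
    """Enforce hourly and daily request caps per user (illustrative sketch)."""

    def __init__(self, hourly_limit=360, daily_limit=1400):
        self.hourly_limit = hourly_limit
        self.daily_limit = daily_limit
        self.timestamps = deque()  # one entry per accepted request

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Forget requests older than 24 hours.
        while self.timestamps and now - self.timestamps[0] > 86400:
            self.timestamps.popleft()
        last_hour = sum(1 for t in self.timestamps if now - t <= 3600)
        if last_hour >= self.hourly_limit or len(self.timestamps) >= self.daily_limit:
            return False  # the user would see a "try again later" message
        self.timestamps.append(now)
        return True
```

With a low hourly cap, a third request inside the same hour is rejected, while requests spread over the day pass until the daily cap is reached.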
2. Integration with AI Providers
This is the most interesting part and the one that will change most often with each update. Over the coming months, more integrations with new providers, and with new models from existing providers, will be added. To make the most of the module, it is recommended to update regularly.
To configure each provider:
- Open an account on the provider's platform (the link appears in the panel)
- Generate an API key on that platform and paste it here
- Indicate which models from this provider you want to:
- ALLOW THEIR USE to users
- DISPLAY to users **
** In some cases it can be useful to display a model that is not open for use but that you are willing to activate on request. Some models are too expensive for everyday use, yet can be very valuable to activate at specific moments.
Note: in the future, it is planned to allow activating/deactivating models PER USER, giving you finer control over who consumes each AI service.
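To illustrate the VISIBLE/USABLE distinction, here is a hypothetical sketch of how such a configuration could be filtered. The model names and flag names are invented for the example, not taken from the module:

```python
# Hypothetical admin configuration: each model is flagged visible and/or usable.
models = {
    "gpt-4o":      {"visible": True,  "usable": True},
    "gpt-4o-mini": {"visible": True,  "usable": True},
    "o1-preview":  {"visible": True,  "usable": False},  # shown but disabled
    "legacy-3.5":  {"visible": False, "usable": False},  # hidden entirely
}

# Users see every visible model; those not usable appear grayed out.
shown = [name for name, flags in models.items() if flags["visible"]]
disabled = [name for name in shown if not models[name]["usable"]]
```

A model marked visible but not usable ends up in `disabled`, which matches the "display but don't allow" case described above.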
OpenAI
- This is the company behind the famous ChatGPT, and it has one particularity: it offers models for EMBEDDINGS, a fundamental AI service for building any semantic search system. For example, a user searches for "felines" and the system finds everything related to cats and other felines, even if the search term was misspelled.
In the very near future, this embeddings system will power the search engine over each user's chat history. It will be implemented as an OPTIONAL feature, but it is well worth enabling: embeddings are very cheap, so they may increase your AI consumption by barely 1%, while the benefit in return is radical.
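Semantic search over embeddings usually boils down to comparing vectors by cosine similarity. The sketch below uses tiny made-up vectors instead of real API calls, just to show the principle; in practice the vectors would come from an embeddings model such as OpenAI's:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical pre-computed embeddings for stored chat messages.
history = {
    "How do I care for my cat?":       [0.9, 0.1, 0.0],
    "Quarterly sales report template": [0.0, 0.2, 0.9],
}
query_vector = [0.8, 0.2, 0.1]  # pretend embedding of the search "felines"

best = max(history, key=lambda msg: cosine_similarity(query_vector, history[msg]))
```

Even though "felines" never appears in the stored text, the cat-related message scores highest, which is exactly why embeddings beat keyword matching for this kind of search.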
- The text models have different prices for INPUT (what the user writes, including any document they attach) than for OUTPUT (the model's response).
- Consumption is measured in MILLIONS OF TOKENS. A "token" is a fragment of a word, roughly equivalent to one or two human syllables. A million tokens therefore corresponds to more than TWO THOUSAND PAGES of an average book, so using these AIs is quite economical.
Note: in a conversation with the AI, each new message includes all the previous ones. This lets the AI "remember" the context, but it sharply increases resource consumption as the conversation grows, since the AI rereads the entire history with each interaction.
The recommendation is that when you need to change the subject, you open a new conversation.
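To see why resending the history adds up, here is a small back-of-the-envelope calculation with assumed per-turn token counts (the figures are invented for illustration):

```python
# Each new turn resends the whole conversation so far, so total input
# tokens grow much faster than the messages themselves.
message_tokens = [200, 250, 180, 300, 220]  # assumed size of each turn

total_input = 0
context = 0
for tokens in message_tokens:
    context += tokens       # the history grows by this turn
    total_input += context  # and the whole context is billed as input

standalone = sum(message_tokens)  # cost if each message were sent alone
```

With these figures, the five messages total 1150 tokens on their own, but resending the history pushes billed input to 3360 tokens, nearly three times as much. This is why opening a new conversation for a new subject saves money.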
Perplexity
Perplexity is a fabulous provider of AI-powered Internet search that hosts open-source models from different vendors on its own servers. Specifically, through its API, it currently offers Meta's Llama 3.1 family of models in two modalities:
- Simply to "chat", which works the way we are used to with ChatGPT.
That is, the model answers based on the information it was trained on, which has a specific cut-off date. These models have no access to information newer than the end of their training.
- To "search online", which is quite different: it searches the Internet (using Bing) for information relevant to your question and builds a "good answer" from it.
Obviously, the quality of the answer depends heavily on the quality of the search results.
3. Conversing with an AI
When you access the AI section from the top menu, you'll see a screen like the following, where each user will:
- "converse" with different AI models
- interact with their conversation history
It's a simple two-panel interface:
- Left panel: conversation history
- Right panel: conversation
I don't want to lengthen this guide unnecessarily; almost all the functions and buttons are fairly obvious, so I'll only comment on the most relevant ones. If you feel something is left unanswered, please leave a comment at the bottom so I can add that information here. Thank you!
Graphic Elements
- Grayed-out buttons correspond to functions planned for the near future; they will be enabled in successive updates.
- The basic way of working is by pressing the "+" button to start a new conversation, or by clicking on any from the history to continue with it.
- In the right panel, you have a button to hide the left panel.
- In the right panel, you have a button to "maximize" the chat area:
1. hides the left panel
2. hides the top Dolibarr menu bar
3. maximizes your browser window
- When writing a new message, you can use the ENTER key to add line breaks, and the TAB key to move the focus to the SEND MESSAGE button. This way you can hold a conversation without lifting your hands from the keyboard.
Chat Options
When you click any three-dot icon (···), a modal window (popup) opens with:
- the numeric ID of the conversation
- the start date of the conversation
- the AI model active, or used in the last message of that chat (remember that each message in the same chat can use a different model!)
- button to share (coming soon)
- button to change the title in the history (coming soon)
- DELETE button (already working!), which will ask for confirmation before deleting
AI Model Selection
Clicking the "AI Model" button described above opens a second dialog box:
- with the list of models that the administrator has configured as VISIBLE in the admin panel
- models configured as VISIBLE but NOT USABLE appear disabled
- I decided to include the price to make end users aware of the (big!) price difference between using one model or another, and encourage responsible use
Note: right now there are only text generation models, but soon there will be image generation models, audio transcription, etc.
Coming Soon
More integrations with AI providers
If you've been keeping up with this technological revolution we're experiencing with "generative AIs", you know that new developments appear every week. So my intention is to add integration with more and more models and model providers. The idea is that from a single panel (from our Dolibarr) we have a solid base from which to benefit from these AIs, in a multi-user environment.
- image generation (with FLUX, Stable Diffusion and hopefully IDEOGRAM)
- multi-language audio transcription (with OpenAI's Whisper)
- video generation (this is going to explode before the end of 2024!)
- audio generation from text (OpenAI, Azure Cognitive Services?)
Interacting with the Dolibarr database
Additionally, I'm already looking into how access to the data stored in our database could be integrated into these conversations with AI, for example to:
- analyze trends
- summarize or retrieve data on customers, projects, products, etc.
- compare products, orders, etc...
There's no doubt that AI can be very useful for interacting with our own data. I'll be working on it. If you have any useful references from people who already have something advanced in this, even if it's in another ERP, please leave me the reference in the comments!
Contextual AI functions
I would also like to add "quick assistance buttons" to the various Dolibarr lists and object cards, and to text editors, with pre-recorded prompts such as:
- translate to language X
- grammatically correct and improve a draft (email, ticket response, etc.)
- summarize a data table (of a customer's invoices, production, etc.)