Abstract

Modern chatbots increasingly rely on artificial intelligence and machine learning techniques such as Natural Language Understanding (NLU). Because state-of-the-art NLU models are large, resource-intensive, and must scale, they are typically trained and served on powerful remote servers. At the same time, users are increasingly concerned about how their data is protected. A solution is to perform the inference part of the NLU model at the edge, directly in the client's browser. We used a pre-trained model with TensorFlow.js, which lets us embed the model in the client's browser and run NLU inference locally. The model achieved an accuracy of more than 80%. These initial results show that NLU at the edge is feasible and provides an effective foundation for further development.
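As a hedged illustration of this setup, the sketch below shows how a pre-trained TensorFlow.js model could be loaded and run for intent classification entirely in the browser. The model URL, vocabulary, input length, and intent labels are placeholder assumptions for illustration, not the actual artifacts used in this work.

```typescript
// Minimal sketch of in-browser NLU intent inference with TensorFlow.js.
// MODEL_URL, VOCAB, MAX_LEN and INTENTS are hypothetical placeholders.
import * as tf from '@tensorflow/tfjs';

const MODEL_URL = '/models/nlu/model.json';     // assumed location of a pre-trained model
const MAX_LEN = 32;                             // assumed fixed input sequence length
const VOCAB: Record<string, number> = {};       // assumed word -> token id map shipped with the model
const INTENTS = ['greeting', 'order', 'help'];  // hypothetical intent labels

// Turn a sentence into a fixed-length sequence of token ids (pad with 0).
function encode(text: string): number[] {
  const ids = text.toLowerCase().split(/\s+/).map(w => VOCAB[w] ?? 0);
  return ids.slice(0, MAX_LEN).concat(Array(Math.max(0, MAX_LEN - ids.length)).fill(0));
}

export async function classify(text: string): Promise<string> {
  // The model is downloaded once by the browser; all inference stays on the client.
  const model = await tf.loadLayersModel(MODEL_URL);
  const input = tf.tensor2d([encode(text)], [1, MAX_LEN]);
  const scores = model.predict(input) as tf.Tensor;
  const intentIndex = (await scores.argMax(-1).data())[0];
  input.dispose();
  scores.dispose();
  return INTENTS[intentIndex];
}
```

In this kind of design, only the model files are fetched from a server; the user's text never leaves the browser, which addresses the data-protection concern raised above.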
