Chat bots
Revision as of 08:04, 3 July 2024
- Backpropagation through time (BPTT)
- Bag-of-words
- Biases
- Dataset
- Deep learning
- Graphical processing unit (GPU)
- Hyperparameter tuning
- Large language model (LLM)
- Latency
- Long short-term memory (LSTM)
- Loss function
- Memory cell state
- Natural language processing
  - Discourse integration
  - Lexical analysis
  - Pragmatic analysis
  - Semantic analysis
  - Syntactical analysis (parsing)
- Natural language understanding (NLU)
- Pre-processing
- Recurrent neural network (RNN)
- Self-attention mechanism
- Synthetic data
- Tensor processing unit (TPU)
- Transformer neural network (transformer NN)
- Vanishing gradient
- Weights
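A few of the terms above (bag-of-words, dataset, pre-processing) can be illustrated together with a minimal sketch. The tokenizer, the vocabulary-building step, and all function names here are illustrative assumptions, not part of the source list:

```python
from collections import Counter

def tokenize(text):
    # Naive pre-processing: lowercase the text and split on whitespace.
    return text.lower().split()

def build_vocabulary(dataset):
    # Collect every distinct token across the dataset, in a fixed order.
    vocab = sorted({tok for doc in dataset for tok in tokenize(doc)})
    return {tok: i for i, tok in enumerate(vocab)}

def bag_of_words(text, vocab):
    # Count token occurrences; word order is deliberately discarded,
    # which is what makes this a "bag" of words.
    counts = Counter(tokenize(text))
    return [counts.get(tok, 0) for tok in vocab]

# Hypothetical two-document dataset for demonstration.
dataset = ["the bot replies", "the user asks the bot"]
vocab = build_vocabulary(dataset)
vector = bag_of_words("the bot asks the bot", vocab)
```

Representations like this are what simpler chatbot pipelines feed into a classifier; the transformer- and RNN-based terms in the list replace this fixed-vocabulary counting with learned, order-aware encodings.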