Mirror of https://github.com/xai-org/grok-1, synced 2024-11-12 20:21:19 +08:00
Update README.md to have context length higher up
In my original summary of the model specifications, I put the context length near the bottom, but on reflection it is probably one of the most relevant details to end users, so it should appear higher up. Also, "Additional Features" should be the final bullet point for editorial reasons.
parent 310e19eee2
commit 4e2e30bd6f
@@ -25,6 +25,7 @@ Grok-1 is currently designed with the following specifications:
 - **Parameters:** 314B
 - **Architecture:** Mixture of 8 Experts (MoE)
 - **Experts Utilization:** 2 experts used per token
+- **Maximum Sequence Length (context):** 8,192 tokens
 - **Layers:** 64
 - **Attention Heads:** 48 for queries, 8 for keys/values
 - **Embedding Size:** 6,144
@@ -32,7 +33,6 @@ Grok-1 is currently designed with the following specifications:
 - **Additional Features:**
   - Rotary embeddings (RoPE)
   - Supports activation sharding and 8-bit quantization
-- **Maximum Sequence Length (context):** 8,192 tokens

 # Downloading the weights
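The commit message's rationale — that the 8,192-token context length is the spec end users hit first — can be illustrated with a minimal sketch. This is not part of the repository or its diff: `count_tokens`, `fits_in_context`, and the 512-token output reservation are illustrative assumptions, not grok-1 APIs, and a real tokenizer (e.g. SentencePiece) would be needed for exact counts.

```python
# Minimal sketch (illustration only): a prompt must fit within the
# 8,192-token maximum sequence length stated in the README before inference.

MAX_SEQ_LEN = 8_192  # maximum sequence length (context) from the README


def count_tokens(text: str) -> int:
    # Hypothetical placeholder: whitespace splitting only approximates
    # what an actual tokenizer would report.
    return len(text.split())


def fits_in_context(prompt: str, reserved_for_output: int = 512) -> bool:
    # Leave `reserved_for_output` tokens of headroom for the model's reply.
    return count_tokens(prompt) + reserved_for_output <= MAX_SEQ_LEN


if __name__ == "__main__":
    print(fits_in_context("Explain mixture-of-experts models in one paragraph."))
```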