Need Help?
Learn more about the Dell Enterprise Hub
What is the Dell Enterprise Hub?
The Dell Enterprise Hub is an online portal that makes it easy to train and deploy the latest open AI models on-premises using Dell platforms, and to securely build Generative AI applications. The result of a deep engineering collaboration between Dell Technologies and Hugging Face, the Dell Enterprise Hub includes:
- Optimized Containers: Ready-to-use containers and scripts for model deployment and fine-tuning, optimized for each base model and Dell hardware configuration.
- Native support for Dell Platforms: Dell customers can easily filter models by the Dell Platforms they have access to, and get tested and maintained configurations for training and inference.
- Bring Your Own Model: Dell Enterprise Hub supports importing compatible fine-tuned models into the optimized containers.
- Enterprise Security: Leverage Hugging Face Enterprise Hub advanced security features including Single Sign-On, fine-grained access controls, audit logs and malware scanning, along with the enterprise-curated models offered in Dell Enterprise Hub.
The Dell Enterprise Hub provides a secure, streamlined experience for Dell customers to build Generative AI applications with confidence, taking full advantage of the computing power of the Dell Platforms at their disposal.
How do I deploy a Model?
Deploying a model on a Dell Platform is a simple four-step process:
- Choose Your Model: Select a model from the Dell Model Catalog; the catalog indicates which models currently support direct deployment.
- Configure Inference Settings: In the Model Card, click Deploy, then select a compatible Dell Platform and the desired configuration.
- Run Deployment Command: Copy the generated command, and run it inside the Dell infrastructure.
- Test Your Model: Once the container is set up and the endpoints are up and running, test your model with the provided sample code snippets; a minimal example follows below.
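For example, once the container is running you can send a request to the endpoint. The snippet below is a minimal sketch, not the exact snippet generated by the Dell Enterprise Hub: it assumes the endpoint is reachable at `http://localhost:8080` (the host and port depend on your deployment command) and uses the standard Text Generation Inference `/generate` route:

```python
# Minimal sketch of testing a deployed endpoint. The URL is an assumption;
# use the host/port from your own deployment command.
import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is Dell Enterprise Hub?",
        "parameters": {"max_new_tokens": 64},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["generated_text"])
```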
If you want to deploy a fine-tuned model instead of the curated models above, refer to How can I deploy a fine-tuned model?
The Dell Enterprise Hub inference containers leverage Hugging Face ML production technologies, including Text Generation Inference for Large Language Models. The predefined configurations can easily be adjusted to fit your needs by changing the default values for:

- `NUM_SHARD`: the number of shards, i.e. the degree of tensor parallelism, used for the model.
- `MAX_INPUT_LENGTH`: the maximum input length, in tokens, that the model can handle.
- `MAX_TOTAL_TOKENS`: the maximum total tokens per request, counting both the prompt and the generated output.
- `MAX_BATCH_PREFILL_TOKENS`: the maximum number of tokens to prefill per batch, used for continuous batching.

More information can be found in the Hugging Face Text Generation Inference documentation.
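These limits interact per request: `MAX_TOTAL_TOKENS` caps the prompt plus the generated tokens, so the longer the prompt, the fewer new tokens a request can ask for. A small illustrative sketch, using values from the hardware tables later in this page:

```python
# Illustrative only: how MAX_INPUT_LENGTH and MAX_TOTAL_TOKENS bound a
# single request. Values mirror a 4000/4096 configuration from the tables.
MAX_INPUT_LENGTH = 4000   # longest prompt the server accepts, in tokens
MAX_TOTAL_TOKENS = 4096   # prompt + generated tokens per request

def max_new_tokens(prompt_tokens: int) -> int:
    """Largest max_new_tokens a request with this prompt size may ask for."""
    if prompt_tokens > MAX_INPUT_LENGTH:
        raise ValueError("prompt exceeds MAX_INPUT_LENGTH")
    return MAX_TOTAL_TOKENS - prompt_tokens

print(max_new_tokens(3000))  # 1096 tokens left for generation
```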
How can I fine-tune a Model?
To start training one of the models available in the Dell Model Catalog, follow these steps:
- Select Base Model: Start by choosing a trainable model in the Model Catalog; the catalog indicates which models are currently available for training.
- Configure Training Settings: From the Model Card, click Train, then select the Dell Platform you want to use. Next, set the local path of the CSV training dataset file, and the path to store the fine-tuned model. Learn how to format and prepare your dataset under How should my dataset look? below. Finally, adjust the training configuration default settings to match your requirements (a quick pre-flight sketch follows this list).
- Deploy Training Container: With Dell Enterprise Hub, model training jobs are configured within ready-to-use, optimized training containers. You can run your training job by deploying the container using the provided command, executed within your Dell environment.
- Monitor Training Job: Track the progress of your training job to ensure optimal performance and results.
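Before launching the container, it can help to sanity-check the inputs from step 2. The sketch below is not part of Dell Enterprise Hub, and the paths are hypothetical placeholders:

```python
# Hypothetical pre-flight check: the training CSV exists and has a `text`
# column (see "How should my dataset look?"), and the output path is usable.
from pathlib import Path
import pandas as pd

train_csv = Path("/data/train.csv")          # placeholder: your dataset path
output_dir = Path("/data/finetuned-model")   # placeholder: your output path

sample = pd.read_csv(train_csv, nrows=5)
assert "text" in sample.columns, "training CSV needs a 'text' column"
output_dir.mkdir(parents=True, exist_ok=True)
print(f"Dataset and output path look OK: {train_csv} -> {output_dir}")
```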
Training containers leverage Hugging Face `autotrain`, a powerful tool that simplifies the process of model training. `autotrain` supports a variety of configurations to customize training jobs, including:

- `lr`: the initial learning rate for the training.
- `epochs`: the number of training epochs.
- `batch_size`: the size of the batches used during training.

More details on these configurations can be found in the AutoTrain CLI documentation.
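As a rough sketch of what an invocation with these settings could look like: the `llm --train` subcommand and the exact flag spellings below are assumptions that vary by AutoTrain version, so verify them against the AutoTrain CLI documentation before use.

```python
# Illustrative only: composes an `autotrain` CLI call from the settings
# named above (lr, epochs, batch_size). Subcommand and flag names are
# assumptions; check the AutoTrain CLI documentation.
import subprocess

settings = {"lr": 3e-4, "epochs": 3, "batch-size": 2}

cmd = [
    "autotrain", "llm", "--train",
    "--model", "mistralai/Mistral-7B-v0.1",  # example base model
    "--data-path", "/data",                  # folder containing the CSV
    "--text-column", "text",                 # column holding training samples
]
for flag, value in settings.items():
    cmd += [f"--{flag}", str(value)]

subprocess.run(cmd, check=True)
```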
How should my dataset look?
To fine-tune LLMs, your dataset should have a column containing the formatted training samples. The column used for training is defined through the `text-column` argument when starting your training; in the example below it would be `text`.
Example Format:

| text |
|---|
| human: hello \n bot: hi nice to meet you |
| human: how are you \n bot: I am fine |
| human: What is your name? \n bot: My name is Mary |
| human: Which is the best programming language? \n bot: Python |
You can use both CSV and JSONL files. For more details, refer to the AutoTrain documentation.
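For instance, a matching `train.csv` with a single `text` column can be produced with pandas; the rows below mirror the table above:

```python
# A minimal sketch of writing a CSV with a single `text` column in the
# format shown above. Raw strings keep the literal "\n" turn separator.
import pandas as pd

samples = [
    r"human: hello \n bot: hi nice to meet you",
    r"human: how are you \n bot: I am fine",
    r"human: What is your name? \n bot: My name is Mary",
    r"human: Which is the best programming language? \n bot: Python",
]
pd.DataFrame({"text": samples}).to_csv("train.csv", index=False)
```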
How can I deploy a fine-tuned model?
To deploy a fine-tuned model on your Dell Platform, you can use the special "Bring Your Own Model" (BYOM) Dell inference container available in the Dell Enterprise Hub, which makes it easy to integrate fine-tuned models into your Dell environment.
- Select Base Model: In the Model Catalog, open the Model Card for the base model used for fine-tuning, then click "Deploy Fine-Tuned Model" to access the BYOM feature.
- Configure Inference Settings: Select the Dell Platform you want to use, and the configuration options. Make sure to correctly set the Path to the local directory where your fine-tuned model is stored.
- Run Deployment Command: Copy the generated command, and run it inside your Dell environment.
- Test Your Model: Once the BYOM container is set up and endpoints are up and running, test your model with the provided sample code snippets.
Unlike direct deployment of models provided in the Dell Model Catalog, deploying a fine-tuned model mounts the model into the BYOM Dell inference container. Make sure that the mounted directory contains the fine-tuned model and that the provided path is correct.
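A quick way to catch a wrong path before deployment is to check that the directory actually holds a model. The sketch below is illustrative, and the path is a placeholder:

```python
# Hypothetical sanity check that a local directory contains the files a
# Hugging Face-format fine-tuned model typically ships with.
from pathlib import Path

model_dir = Path("/models/my-finetune")  # placeholder: your fine-tuned model

assert model_dir.is_dir(), f"{model_dir} does not exist"
assert (model_dir / "config.json").is_file(), "missing config.json"
assert any(model_dir.glob("*.safetensors")) or any(model_dir.glob("*.bin")), \
    "missing model weights (*.safetensors or *.bin)"
print(f"{model_dir} looks like a complete model directory")
```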
Hardware Requirements
Gemma
For models fine-tuned from the Gemma base model, the following hardware configurations are recommended for deployment:
| Dell Platforms | Number of Shards (GPUs) | Max Input Tokens | Max Total Tokens | Max Batch Prefill Tokens |
|---|---|---|---|---|
| xe9680-nvidia-h100 | 1 | 4000 | 4096 | 16182 |
| xe9680-amd-mi300x | 1 | 4000 | 4096 | 16182 |
| xe8640-nvidia-h100 | 1 | 4000 | 4096 | 16182 |
| r760xa-nvidia-h100 | 1 | 4000 | 4096 | 16182 |
| r760xa-nvidia-l40 | 2 | 4000 | 4096 | 8192 |
| r760xa-nvidia-l40 | 4 | 4000 | 4096 | 16182 |
Llama 3.1 8B
For models fine-tuned from the Llama 3.1 8B base model, the following configurations are suitable:
| Dell Platforms | Number of Shards (GPUs) | Max Input Tokens | Max Total Tokens | Max Batch Prefill Tokens |
|---|---|---|---|---|
| xe9680-nvidia-h100 | 1 | 8000 | 8192 | 32768 |
| xe9680-amd-mi300x | 1 | 8000 | 8192 | 32768 |
| xe8640-nvidia-h100 | 1 | 8000 | 8192 | 32768 |
| r760xa-nvidia-h100 | 1 | 4000 | 4096 | 16182 |
| r760xa-nvidia-l40 | 2 | 8000 | 8192 | 16182 |
| r760xa-nvidia-l40 | 4 | 8000 | 8192 | 32768 |
Llama 3.1 70B
For models fine-tuned from the Llama 3.1 70B base model, use these configurations for deployment:
| Dell Platforms | Number of Shards (GPUs) | Max Input Tokens | Max Total Tokens | Max Batch Prefill Tokens |
|---|---|---|---|---|
| xe9680-nvidia-h100 | 4 | 8000 | 8192 | 16182 |
| xe9680-nvidia-h100 | 8 | 8000 | 8192 | 16182 |
| xe9680-amd-mi300x | 4 | 8000 | 8192 | 16182 |
| xe9680-amd-mi300x | 8 | 8000 | 8192 | 16182 |
| xe8640-nvidia-h100 | 4 | 8000 | 8192 | 8192 |
Mistral 7B
Hardware configurations for models fine-tuned from the Mistral 7B base model are as follows:
| Dell Platforms | Number of Shards (GPUs) | Max Input Tokens | Max Total Tokens | Max Batch Prefill Tokens |
|---|---|---|---|---|
| xe9680-nvidia-h100 | 1 | 8000 | 8192 | 32768 |
| xe9680-amd-mi300x | 1 | 8000 | 8192 | 32768 |
| xe8640-nvidia-h100 | 1 | 8000 | 8192 | 32768 |
| r760xa-nvidia-h100 | 1 | 4000 | 4096 | 16182 |
| r760xa-nvidia-l40 | 2 | 8000 | 8192 | 16182 |
| r760xa-nvidia-l40 | 4 | 8000 | 8192 | 32768 |
Mixtral 8x7B
For models fine-tuned from the Mixtral base model, the deployment configurations are:
| Dell Platforms | Number of Shards (GPUs) | Max Input Tokens | Max Total Tokens | Max Batch Prefill Tokens |
|---|---|---|---|---|
| xe9680-nvidia-h100 | 4 | 8000 | 8192 | 16182 |
| xe9680-nvidia-h100 | 8 | 8000 | 8192 | 16182 |
| xe9680-amd-mi300x | 4 | 8000 | 8192 | 16182 |
| xe9680-amd-mi300x | 8 | 8000 | 8192 | 16182 |
| xe8640-nvidia-h100 | 4 | 8000 | 8192 | 8192 |
| r760xa-nvidia-h100 | 4 | 8000 | 8192 | 16182 |
Resources
- Hugging Face Enterprise Hub documentation
- Hugging Face Text Generation Inference documentation
- Hugging Face AutoTrain documentation