==Deep Learning Workflow (2025)==
''Your up-to-date guide for running deep learning assignments on the **Faraday** cluster.''
---
===Contents===
1. [[#Where to Start (JupyterHub)|Where to Start (JupyterHub)]]
2. [[#What Python Version to Use|What Python Version to Use]]
3. [[#Using GPUs for Training|Using GPUs for Training]]
4. [[#Using Pretrained Models with Keras|Using Pretrained Models with Keras]]
---
==Where to Start (JupyterHub)==
All machine learning assignments should now be run on **Faraday** using its dedicated JupyterHub instance:

https://jupyter.cluster.earlham.edu/hub/login
Log in with your Earlham credentials. This environment:
- Comes with TensorFlow and PyTorch preloaded
- Gives direct access to Faraday's GPU nodes
- Supports long-running experiments and interactive debugging
To confirm GPU access in a notebook:
<pre>
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
</pre>
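PyTorch is preloaded as well, so you can run the analogous check from that side; a minimal sketch:

<pre>
import torch

# CUDA-backed GPUs are exposed through torch.cuda
print(torch.cuda.is_available())           # True if a GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. the NVIDIA model name
</pre>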
---
==What Python Version to Use==
Faraday’s default deep learning environment uses **Python 3.12**, which is supported by current releases of the major machine learning frameworks.
Supported frameworks:
- **TensorFlow**
- **PyTorch**
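To double-check the interpreter and framework versions from inside a notebook (the support notes in the comments describe upstream releases in general, not Faraday-specific guarantees):

<pre>
import sys
import tensorflow as tf
import torch

print(sys.version)        # expect a 3.12.x interpreter
print(tf.__version__)     # TensorFlow 2.16 was the first release to support Python 3.12
print(torch.__version__)  # PyTorch 2.2 was the first release to support Python 3.12
</pre>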
To list all available modules:
<pre>$ module avail</pre>
Compatibility references:
- TensorFlow: https://www.tensorflow.org/install/source#gpu
- PyTorch: https://pytorch.org/get-started/previous-versions/
---
==Using GPUs for Training==
Training on GPUs is essential for modern deep learning tasks. Faraday is equipped with high-performance NVIDIA GPUs that dramatically accelerate training.
To verify GPU access:
<pre>
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
</pre>
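Beyond listing devices, you can confirm that work actually lands on the GPU by pinning a small computation there explicitly; a minimal sketch:

<pre>
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # run a small matrix multiply on the first GPU
    with tf.device('/GPU:0'):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)
    print(c.device)  # should end in .../GPU:0
else:
    print("No GPU visible -- check your JupyterHub session")
</pre>

TensorFlow places operations on an available GPU automatically; the explicit `tf.device` context is just a way to verify the placement.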
---
==Using Pretrained Models with Keras==
If you’re using **MobileNetV2**, **InceptionV3**, **VGG19**, or **VGG16**, you can easily load them through Keras:
<pre>
from keras.applications import MobileNetV2, InceptionV3, VGG19, VGG16
</pre>
Each model can be initialized with pretrained weights (e.g., from ImageNet):
<pre>
base_model = MobileNetV2(
    weights='imagenet',
    include_top=False,
    input_shape=(224, 224, 3)
)
</pre>
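Each of these architectures expects its inputs scaled the way it was trained, and Keras ships a matching `preprocess_input` per model family. A small sketch for MobileNetV2, reusing `base_model` from above (the random batch is a hypothetical stand-in for your own images):

<pre>
import numpy as np
from keras.applications.mobilenet_v2 import preprocess_input

# hypothetical batch of 8 raw RGB images with pixel values in [0, 255]
images = np.random.uniform(0, 255, size=(8, 224, 224, 3)).astype('float32')

inputs = preprocess_input(images)      # MobileNetV2 expects pixels scaled to [-1, 1]
features = base_model.predict(inputs)  # feature maps, since include_top=False
print(features.shape)                  # e.g. (8, 7, 7, 1280)
</pre>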
These models run efficiently on Faraday's GPUs when used with `tensorflow` and `keras`. They are ideal for transfer learning, feature extraction, and fine-tuning workflows.
To freeze layers during transfer learning:
<pre>
for layer in base_model.layers:
    layer.trainable = False
</pre>
You can then add your own classifier on top using `tf.keras.Sequential` or the functional API.
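A minimal functional-API sketch of that step (the 10-class softmax head is hypothetical; size it for your own assignment):

<pre>
import tensorflow as tf

num_classes = 10  # hypothetical; set to your task's number of labels
inputs = tf.keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)           # keep the frozen base in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)  # collapse feature maps to a vector
outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
</pre>

After compiling, `model.fit(...)` trains only the new head while the pretrained base stays frozen.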
---
''Last updated: May 2025''