
2021

December 21, 2021 (v0.11.0)

Project overview

VESSL gives you a better bird's-eye view of your ongoing project with a customizable project summary page. Using Project Overview, you can easily capture an important observation or mark meaningful progress by compiling experiment metrics and visualizations created in VESSL.
Project Overview also serves as a collaborative report for your team, where you can take notes in Markdown format, similar to a README.md in a Git repository. You can create your own project overview under Project. Refer to our docs for more details and a sample document.

 Improvements & fixes

Improved our Python SDK to support .wav file logging using vessl.Audio (see the sketch after this list). See our docs for more details.
Optimized the execution of agent querying and logging, bringing noticeable speed improvements across the platform.
Fixed an issue where Docker image pulls would occasionally fail.
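A minimal sketch of .wav logging, assuming vessl.Audio wraps a local file path and is passed to vessl.log as part of the payload (the payload key and the caption argument are illustrative assumptions, not confirmed API details):

import vessl

vessl.init()  # start or attach to an experiment

# Log a .wav file as an audio object; the key name and caption are illustrative
vessl.log(
    payload={"sample-audio": [vessl.Audio("sample.wav", caption="validation sample")]}
)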

 TensorBoard integration

You can now record the metrics and media logged to TensorBoard in VESSL. All you have to do is import our Python SDK and call vessl.init(tensorboard=True) when you initialize your experiment. Check out our docs for more details.
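A minimal sketch of how this might look in a PyTorch training script, assuming metrics written through the standard SummaryWriter are picked up once tensorboard=True is set (train_one_epoch is a hypothetical helper):

import vessl
from torch.utils.tensorboard import SummaryWriter

vessl.init(tensorboard=True)  # enable the TensorBoard integration before writing any logs

writer = SummaryWriter()
for epoch in range(10):
    loss = train_one_epoch()  # hypothetical training step returning a scalar loss
    writer.add_scalar("loss", loss, epoch)  # written to TensorBoard and mirrored to VESSL
writer.close()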

 Training time estimate

You can now get an estimate of the remaining training time by adding vessl.progress() to your code. You can view this information by hovering over the status of a running experiment. Refer to our docs for common use cases.
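As a rough sketch, assuming vessl.progress() takes the fraction of training completed as a float between 0 and 1 (train_one_epoch is a hypothetical helper):

import vessl

vessl.init()
total_epochs = 100
for epoch in range(total_epochs):
    train_one_epoch()  # hypothetical training step
    # Report the fraction of work completed so VESSL can estimate the remaining time
    vessl.progress((epoch + 1) / total_epochs)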

November 1, 2021 (v0.9.1)

Distributed training

VESSL now supports PyTorch DistributedDataParallel. You can create a multi-node distributed training experiment simply by specifying the distributed mode and worker count on the Create New Experiment page. Check out our blog and docs for more details, including a guide on CLI commands.
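On the code side, the experiment runs standard PyTorch DistributedDataParallel. The sketch below is a generic wrapper, not VESSL-specific, assuming the launcher sets the usual environment variables (RANK, WORLD_SIZE, MASTER_ADDR, LOCAL_RANK):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model):
    # Join the default process group using the environment variables set by the launcher
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    # Wrap the model so gradients are synchronized across all workers at each step
    return DDP(model.cuda(local_rank), device_ids=[local_rank])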

Local experiment tracking

With support for local experiment tracking, VESSL becomes a more powerful central repository. You can begin recording your local experiments to VESSL by first calling vessl.init() and then logging with vessl.log(). Refer to our docs and example for more details.
# Initialize a new local experiment
vessl.init()

# Train function with local experiment tracking
def train(model, device, train_loader, optimizer, epoch, start_epoch):
    model.train()
    loss = 0
    for batch_idx, (data, label) in enumerate(train_loader):
        ...
        # Log loss metrics to VESSL
        vessl.log(
            step=epoch + start_epoch + 1,
            payload={'loss': loss.item()}
        )

October 1, 2021 (v0.9.0)

Improved experiment dashboard

The experiment dashboard now comes with tag, filter, and sort options. You can use these three together to create more specific views, such as "experiments created by Floyd on node_2_GPU_1 with accuracy ≥ 0.95 and epoch = 17".

Improved charts

You can now group multiple runs for a detailed comparison of key metrics, and configure the range and value of the charts. With support for synchronized mouseover metrics and pan mode, it's easier to scroll through these charts as well.

September 1, 2021 (v0.8.0)

Sweep (Automated Model Tuning)

You can now find the best hyperparameters using VESSL’s automated model tuning. This means you no longer have to manually run individual experiments to optimize your models.

 Workspaces

You can now start your own development environment using VESSL Workspace. Configuring a workspace is as easy as clicking a few buttons to specify the cluster and resources of your choice.