
New in VESSL AI

We are constantly improving our product with new features and bug fixes. Join our community Slack and follow us on Twitter or LinkedIn to stay updated. For questions and feedback, feel free to reach out at support@vessl.ai.

July 25, 2023 (v0.18.1)

 Bring your own clouds

You can now bring your AWS and GCP accounts to VESSL. We’ve prepared a Terraform pattern that you can use alongside our vessl cluster command. Refer to our latest documentation and spin up GPU instances from cloud accounts in under 5 minutes.

 Improvements & fixes

Renamed the managed AWS clusters to aws-apne2 and aws-uw2.
Added a feature that lets a user leave an organization after re-assigning their workloads.

 Community updates

Learn how you can build and deploy a Stable Diffusion app on your GPU clusters with VESSL Run and Streamlit. Follow the guide in our latest blog post.
Deploy computationally intensive models like GenAI and LLMs
Spin up a GPU-accelerated runtime environment with VESSL Run
Quickly build web-based data applications using Streamlit

June 9, 2023

Next week, VESSL AI will be in Vancouver for CVPR 2023! We are excited to host the official student social event, share our latest product updates, and showcase demos! Here are a few things to know about VESSL AI at the conference.
VESSL AI is hosting the official student social event. Connect with 500+ graduate students, senior faculty, and industry leaders.
Run CVPR 2023 Highlight models and papers like Dreambooth, ImageBind, MobileNeRF, and VisProg with a single command.
We are sharing our latest update on VESSL Run, the easiest way to run open-source models with a single YAML file.
Apply for our free Academic plan to launch GPU-backed training jobs and Jupyter notebooks in seconds.
Our team will also be at booth 1527 all week. Stop by to see more of our latest work!

May 23 (v0.18.0)

 Introducing VESSL Run — Unified YAML interface for open-source models

Today, we are releasing VESSL Run, the easiest way to train, fine-tune, and scale open-source AI/ML models. This works by running a YAML-defined model with our single-line command — vessl run -f dreambooth.yaml, for example.
We simplified the complex compute backends and system details required to run models into a unified YAML interface, so you can start training without getting bogged down in ML-specific peripherals like cloud infrastructure, CUDA configurations, and Python dependencies.
Learn more about VESSL Run on our latest blog post.
With VESSL Run, we are also releasing VESSL Hub, where you can find YAML files to run off-the-shelf models in seconds. Explore more models now at https://vessl.ai/hub.
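A run definition like the dreambooth.yaml referenced above might look like the following minimal sketch. The field names and values here are illustrative assumptions, not the definitive schema; refer to the VESSL Run docs or VESSL Hub for actual examples.

```yaml
# Hypothetical sketch of a VESSL Run YAML definition.
# All field names and values below are assumptions for illustration only.
name: dreambooth
resources:
  cluster: aws-apne2          # e.g. one of the managed clusters mentioned above
  accelerators: 1             # assumed GPU count syntax
image: nvcr.io/nvidia/pytorch:23.05-py3
run:
  - pip install -r requirements.txt
  - python train_dreambooth.py --steps 800
```

You would then launch it with a single command such as vessl run -f dreambooth.yaml.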

Feb 24, 2023 (v0.17.5)

 GitLab integration

We’ve completed the Git trinity: you can now add GitLab integration in addition to GitHub and Bitbucket.

 VESSL for Enterprise

We’ve added a new section on our landing page. Take a peek into how our Enterprise customers like Hyundai Motors use VESSL.

Improvements & fixes

Check out our updated Clusters docs.
Experiments in Idle status for over 6 hours will now be automatically terminated.

Feb 3, 2023 (v0.17.4)

 Connect Public Git repos

You can now connect public GitHub, GitLab, and Bitbucket repositories to VESSL. This is as simple as copying and pasting the .git URL of the repository into your experiment launch page. Try it out yourself with our public GitHub repo containing example code: VESSL GitHub

 Improved managed Docker images

We are making sure that our managed Docker images stay updated with the latest dependencies. This means you have one less thing to worry about when you set up your runtime & dev environment for your latest model.

Nov 25, 2022 (v0.17.4)

Improved on-premise GPU cluster integration

Integrating on-premise GPU clusters is easier than ever with our latest update. All it takes is a single one-line command. The following command will check and install all the dependencies, like Docker, Helm, and Kubernetes, and help you connect your machines with ease.
curl -sSLf https://install.dev.vssl.ai | sudo bash -s -- --role=controller
For those who are looking to try out the integration with your laptop, we’ve prepared an even more intuitive command. This allows you to use your personal Linux device as a single-node machine, helping you run workloads more easily with all metadata tracked by VESSL.
vessl cluster create --name '[CLUSTER_NAME]' --mode single
First, explore what you can achieve by integrating your personal laptop. Then expand your integration by running our single-line curl command on your GPU cluster. Refer to our tutorial for more details.

 Community update

VESSL AI is sponsoring a student event at NeurIPS. Join the mentoring event led by senior faculty and industry leaders, and network with the next generation of ML professionals from top research groups around the world.

Nov 4, 2022 (v0.17.3)

 Updated integrations page

Integrating and managing connected apps is now more intuitive on our updated integrations page. Check out the new page under Settings → Integrations.

Dashboard CSV downloads

You can now download CSV files of your experiment dashboards. Apply filters and sorts, save the view as a dashboard, and download it as CSV.
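Once downloaded, the CSV is easy to post-process with standard tooling. A minimal sketch using only the standard library; the column names here (experiment, accuracy) are assumptions for illustration, since the actual export schema depends on the metrics you logged.

```python
import csv
import io

# Hypothetical sample of an exported dashboard CSV; real column
# names depend on the metrics tracked in your dashboard view.
exported = """experiment,accuracy
24,0.91
25,0.87
"""

# Parse the rows and pick the experiment with the best accuracy.
rows = list(csv.DictReader(io.StringIO(exported)))
best = max(rows, key=lambda r: float(r["accuracy"]))
print(best["experiment"])  # → 24
```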

 Cluster-level permissions

Enterprise customers can now manage cluster-level permissions of multiple organizations in one admin organization.

 Projects & Workspaces search

We’ve added search bars under Projects and Workspaces so you can search through all your projects and notebook servers at once.

🪩 Community update

Check out photos from our preview event of the upcoming NeurIPS 2022, co-hosted with Seoul National University and Yonsei University. Thank you to our speakers and everyone who stopped by.

Sep 30, 2022 (v0.17.2)

 Stronger access control

Secure your ML assets with improved RBAC. Admins can now configure user access control permissions down to each Project and Workspace.

 Improved quota settings

Optimize your on-premise GPU servers and prevent overruns by setting quota limits on GPU hours and disk size, the number of workloads and occupiable GPUs, and more.

 More integrations

We are also expanding our integrations catalog, starting with Bitbucket. The integration is available for both cloud and on-premise options.

Improvements & fixes

Added Cluster Quotas page under Clusters → Settings.
Added Access Control page under organization-wide Settings.
Pending and Initializing workload hours will no longer be counted toward your quota limit and max runtime.
The default volume size for Workspace /root folder is now limited to 15GB.

Community update

Renaissance: NeurIPS2022 Preview

VESSL AI is holding a preview event of the upcoming NeurIPS 2022 with the top universities in Korea. Join the authors of NeurIPS'22 and network with the top academic and industry research teams in Korea.
Register now at https://vessl.ai/.

Sep 15, 2022 (v0.17.1)

 Workspace backup

You can now download the last backup files of your Workspaces under the Metadata page.
Your backup files are only available when your Workspace is either Running or Stopped.
Terminating your workspace will remove all your files and folders, including your backup files.
Only the files under your /root folder will be available for backups. We recommend keeping this folder under 10GB.
Our backups do not guarantee the latest version of your Jupyter Notebook.
We recommend saving your progress periodically on your personal machine.
The download link is only available to the owner of the Workspace.
This feature may be limited or entirely disabled depending on your organization’s security policy.

 Improvements & fixes

Creating a new Workspace now guides you to the log page.
Fixed a bug where experiment metrics would be logged with wrong data.
Fixed an issue where selecting a CPU-only resource would disable node selector when creating a new workload.

  Community update

Read how VESSL is helping graduate researchers access campus-wide HPCs and run GPU-powered ML workloads faster and easier.

Aug 31, 2022 (v0.17.0)

 Onboarding tutorials

We added an onboarding tutorial on our Home to help our users get the most out of our product. Beginning with a guide on launching new workloads, we are planning to add more in the coming weeks.

Improved log formatting

We added text formatting on our Log pages across the product, making warning and error messages easier to spot and read. We also removed some non-essential and duplicate entries. Stay tuned for our upcoming improvements on CLI command outputs as well.

June 20, 2022 (v0.16.0)

Model serving

Model serving is now available in public beta. Serving has been one of the most requested features from our team and enterprise customers, and we are delighted to release it to the public. Check out our docs for more details.
To serve a model using VESSL, (1) first register the model on VESSL using our Python SDK, (2) specify the deployment specs such as data processing and prediction algorithm, (3) and deploy and test the model using the curl request we provide.
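The final step above tests the deployment over HTTP. A minimal sketch of constructing such a request in Python; the endpoint URL, payload shape, and model name are placeholders for illustration, standing in for the curl request VESSL generates for your deployment.

```python
import json
import urllib.request

# Hypothetical endpoint and payload; substitute the values VESSL
# provides for your own deployment.
endpoint = "https://serve.example.com/v1/models/my-model:predict"
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would send it; skipped here because
# the endpoint above is a placeholder.
print(request.get_method(), request.full_url)
```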

 Improvements & fixes

Renewed organization home — We launched a new organization home page to offer a single place to access the information you need to perform the common ML tasks.
We will be rolling out more widgets through which you can manage integrations and monitor cost forecasts.
Example projects — We added a default example project so you can get started more easily.
Refined sign-up process — We now ask users to create a personal organization during sign-up so we can guide you to your first experiment run faster.
Credit-based billing — We added an auto top-up option to billing, which automatically deducts invoice payments from a saved payment method. This way you don't need to approve each payment manually, and your account is recharged quickly.

May 9, 2022 (v0.15.1)

 New landing page

In addition to the new UI across our platform, we updated our landing page! The page now features the latest product images with more feature descriptions. Explore our take on the modern workflow for machine learning.

 General availability

VESSL is now available to the general public. You can sign up and try VESSL for free. You can also schedule a 30-minute 1:1 tutorial session with one of our team members. We will be releasing refined guides and example projects in the coming weeks.

April 11, 2022 (v0.15.0)

 New home for your ML projects

Home is an overview page for your ongoing projects and activities in your organization. Here, you can find your team’s recent activity along with tips and tricks for using VESSL. We hope this brings more visibility to your team.

 Billing

As we are preparing for our open beta, we made payment and invoicing easier. Under settings, you can now add your credit card, complete payment using Stripe, and monitor your team’s usage.

Improvements & fixes

New UI — We are rolling out a new UI across the platform. We still have legacy components in place, which we hope to update soon.

March 21, 2022 (v0.14.0)

Experiment dashboard with multiple views

You can now create multiple views of experiment tracking dashboard. This means you can apply filters and sorts to your team’s shared dashboard views without disrupting anyone else, and create multiple dashboards based on these changes.
By default, our experiment dashboard comes with two views — a personal one for yourself and one for team collaboration. You can create and share additional views by cloning a dashboard after applying filters and sorts.

Improvements & fixes

Incremental experiment numbers — Experiment names have been simplified to incrementing numbers without the randomly generated words, and experiments are now referred to by integers.
All past experiments have been renamed to follow this name format — experiment name 24-shade-tracker in str → experiment number 24 in int, for example.
Experiment numbers should now be passed in as an int type when using CLI commands or Python SDK.
More commands for vessl experiment — New commands like vessl experiment delete have been added to our CLI. Refer to our docs for more details.

February 14, 2022 (v0.13.0)

 Cluster management

VESSL now gives you the ability to monitor all of your infrastructure in one place. This includes an overview of your clusters — both on-premise and cloud — to detailed usage of each node with system metrics, incidents, workloads, and more.

Improvements & fixes

Under Create New Experiment, Environment Variables has been renamed to Hyperparameters. Our docs have been updated to reflect the change.
Hyperparameters can now be used at runtime by appending them to the start command. See our docs and GitHub for more details including our recommended use cases.
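For instance, a hyperparameter appended to the start command can be consumed inside the training script. The flag name and default below are illustrative assumptions, not part of the product's interface.

```python
import argparse

# Hypothetical training script consuming a hyperparameter appended
# to the start command, e.g.:  python train.py --learning-rate 0.01
parser = argparse.ArgumentParser()
parser.add_argument("--learning-rate", type=float, default=0.001)

# Stand-in for real command-line arguments so the sketch is runnable.
args = parser.parse_args(["--learning-rate", "0.01"])
print(args.learning_rate)  # → 0.01
```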

Community update — VESSL for Academics

January 17, 2022 (v0.12.0)

  vessl run

Our efforts to create a CLI-driven, unified development experience continue with vessl run. vessl run runs and logs multiple experiments synchronously in the current cluster. Refer to our docs and manuals for details including our recommended usage.

Model registry

VESSL now comes with a model registry. Using a model registry, you can view the complete lineage and metadata of the model and manage versions or stages of production-ready models all in a central repository. Check out our docs for more details.

 Improvements & fixes

Applied a new UI to model registry — Our plan is to complete a platform-wide UI overhaul by the end of Q1.
Added an option to bind datasets to a project under project settings.
Added an option to set maximum runtime when creating a new workspace.
Added a default 6 hours of maximum idle time for experiments.
Imported release notes to Notion for improved readability.

Community update — What’s next for VESSL AI

We opened our blog with our funding announcement, sharing what’s next for VESSL AI in 2022. Follow us on Medium!
