How (Not) To Scale Deep Learning in 6 Easy Steps

Posted in Company Blog, Engineering Blog, Machine Learning

Deep learning sometimes seems like sorcery. Its state-of-the-art applications are at times delightful and at times disturbing. The tools that achieve these results are, amazingly, mostly open source, and can work their magic on powerful hardware available to rent by the hour in the cloud. It’s no wonder that companies are eager […]

Productionizing Machine Learning with Delta Lake

Posted in AI, Apache Spark, Company Blog, Data Engineering, Delta Lake, Ecosystem, Education, Engineering Blog, Machine Learning, Platform

Try out this notebook series in Databricks: part 1 (Delta Lake), part 2 (Delta Lake + ML). For many data scientists, the process of building and tuning machine learning models is only a small portion of the work they do every day. The vast majority of their time is spent doing the less-than-glamorous (but […]
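As a quick illustration of the kind of workflow the notebooks cover, here is a minimal sketch of persisting features as a Delta Lake table with PySpark. It assumes a Spark session with the Delta Lake package configured; the path and column names are hypothetical and not taken from the notebooks.

    from pyspark.sql import SparkSession

    # On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical feature table: one row per customer.
    features = spark.range(1000).withColumnRenamed("id", "customer_id")

    # Write the features as a Delta table (ACID transactions, schema enforcement, time travel).
    features.write.format("delta").mode("overwrite").save("/tmp/delta/customer_features")

    # Training jobs can then read a consistent snapshot of the same table.
    training_df = spark.read.format("delta").load("/tmp/delta/customer_features")
    training_df.show(5)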

Deep Learning on Medical Images at Population Scale: On-Demand Webinar and FAQ Now Available!

Posted in CICD, Company Blog, Customers, Deep Learning, Dicom, Engineering Blog, Events, Genomics, HLI, Human Longevity, Machine Learning, Medical Images, MLflow

On June 26th, we hosted a live webinar, Deep Learning on Medical Images at Population Scale, with members of the data science and engineering teams from Human Longevity, Inc. (HLI), a leader in medical imaging and genomics. During the webinar, HLI shared how they use MRI images, whole-genome sequencing data, and other clinical data sets […]

Brickster Spotlight: Meet Greg, From Intern to Senior Software Engineer

Posted in Company Blog, Culture, Data Science, Hyperopt, Hyperparameter Tuning, Machine Learning, MLflow, MLlib

At Databricks, we’re committed to learning and development at every level, so it’s important to our teams that we recruit and develop our next generation of Databricks leaders. Our interns are encouraged to live out one of our core values, “be an owner,” and they play an integral role in developing our platform during their […]

New cost savings option for Azure Databricks with DBU pre-purchase

Posted in Announcements, Company Blog

The rapid adoption of Azure Databricks through our strategic partnership with Microsoft has been remarkable, and it’s proven to be a compelling service for our customers’ big data, analytics and machine learning initiatives. To further help our customers save costs and improve budgeting for Azure Databricks, we are pleased to share a new pricing option […]

Using ML and Azure to improve Customer Lifetime Value: On-Demand Webinar and FAQ Now Available!

Posted in Company Blog, Partners

On July 18th, we hosted a live webinar, Using ML and Azure to improve Customer Lifetime Value, with Rob Saker, Industry Leader, Financial Services; Colby Ford, Associate Faculty, School of Data Science, UNC Charlotte; and Navin Albert, Solutions Marketing Manager at Databricks. This blog post has a recording of the webinar and some of the […]

Network performance regressions from TCP SACK vulnerability fixes

Posted in Company Blog, CVE-2019-11477, CVE-2019-11478, CVE-2019-11479, Network performance, Product, Security, TCP SACK vulnerability

On June 17, three vulnerabilities in Linux’s networking stack were published. The most severe one could allow remote attackers to impact the system’s availability. We believe in offering the most secure image available to our customers, so we quickly applied a kernel patch to address the issues. Since the kernel patch was applied, we have […]
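For context (an assumption on our part, not a detail from the post): the widely published interim workaround for CVE-2019-11477 and CVE-2019-11478 was to disable TCP selective acknowledgments via the net.ipv4.tcp_sack sysctl, pending patched kernels. A Linux-only sketch for checking that setting:

    from pathlib import Path

    # Read the current TCP SACK setting on a Linux host (requires /proc).
    sack_enabled = Path("/proc/sys/net/ipv4/tcp_sack").read_text().strip() == "1"
    print("TCP SACK is", "enabled" if sack_enabled else "disabled (interim mitigation in place)")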

Announcing Databricks Runtime 5.5 with Conda (Beta)

Posted in Company Blog, Conda, Databricks Runtime, Ecosystem, Engineering Blog, Machine Learning, Platform, Product

Databricks is pleased to announce the release of Databricks Runtime 5.5 with Conda (Beta). We introduced Databricks Runtime 5.4 with Conda (Beta) with the goal of making Python library and environment management easy. This release includes several important improvements and bug fixes, as noted in the latest release notes [Azure|AWS]. We recommend all users […]

How Databricks IAM Credential Passthrough Solves Common Data Authorization Problems

Posted in Company Blog, Security

In our first blog post, we introduced Databricks IAM Credential Passthrough as a secure, convenient way for customers to manage access to their data. In this post, we’ll take a closer look at how passthrough compares to other Identity and Access Management (IAM) systems. If you’re not familiar with passthrough, we suggest reading the first […]