
A Guide to Financial Services Talks at Spark + AI Summit 2019

The financial services industry is transforming rapidly. Customers today demand more personalized experiences, better returns on their investments and improved protection against fraud. As a result, every bank, insurance company and institutional investor is turning towards big data and AI to meet these demands and outmaneuver the competition.

Spark + AI Summit is the premier meeting place for organizations keen on building AI applications at scale with leading technologies such as Apache Spark™. Data scientists and engineers from around the globe will gather in San Francisco April 23-25 to share best practices and practical advice for delivering the benefits of AI in the real world.

This year’s Summit continues on the path of growth and innovation, especially for financial services organizations, with a full agenda of technical talks from industry leaders, including Neuberger Berman, Nationwide, Capital One, and FINRA, among others. This year’s Summit attendees can also take part in our Financial Services Networking Event to meet with their peers and hear engaging talks from industry thought leaders.

The following is a brief overview of some of our most highly anticipated financial services talks at this year’s Summit.

Financial Services Sessions at Spark + AI Summit

Deploying Enterprise Scale Deep Learning in Actuarial Modeling at Nationwide

The explosive growth of data is turning insurance pricing on its head and opening doors for data teams to mine valuable insights to fuel predictive pricing models. Join Nationwide’s data science and engineering teams as they share their experiences with deep learning, using TensorFlow and PySpark on the Databricks Unified Analytics Platform.

Investing with Access to Tim Cook’s Dashboard

Keynoting the Financial Services Networking Event at Summit is Michael Recce, Chief Data Scientist for Neuberger Berman. In his talk, Michael asks the question, what could you achieve if you were armed with Tim Cook’s private business dashboard? Gaining that kind of edge begins with big data, but it doesn’t stop there; it requires both a fundamental understanding of the business, and of data, statistics, and methods such as machine learning. Join Michael as he explores these concepts and more.

The Pursuit of Happiness: Building a Scalable Pipeline Using Apache Spark and NLP to Measure Customer Service Quality

Modern NLP techniques can be used to determine the general sentiment of a sentence, phrase, or paragraph of text. This talk, led by FIS Global, explores how to expand NLP/sentiment analysis to investigate the intense interactions that can occur among humans and between humans and robots to derive actionable insights that can influence your customers’ satisfaction.
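To make the idea concrete, here is a minimal lexicon-based sentiment sketch in plain Python. It is purely illustrative: the word lists and scoring rule are hypothetical and are not the approach presented in the talk, which applies far richer NLP at Spark scale.

```python
# Illustrative lexicon-based sentiment scorer (hypothetical word lists,
# not the talk's method). Scores text in [-1, 1].
POSITIVE = {"happy", "great", "helpful", "resolved", "thanks"}
NEGATIVE = {"angry", "slow", "broken", "frustrated", "unresolved"}

def sentiment_score(text: str) -> float:
    """Return (positive hits - negative hits) / total hits, or 0.0 if none."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

In a Spark pipeline, a function like this would typically be wrapped as a UDF and applied across millions of interaction records in parallel.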

Scoring Loan Risk Analysis in Real Time

Accurately assessing loan application risk can reduce a lender’s exposure to risky assets. Armed with the right tools and models, data scientists can analyze customer data and build business rules that govern loan approvals. In this session, you’ll learn how to use the Databricks Unified Analytics Platform to perform ad-hoc analyses of loan-risk data, apply machine learning to compare GBT and XGBoost algorithms, and score this data in batch, streaming, and real time.
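As a toy sketch of the "business rules that govern loan approvals" idea, the snippet below combines a debt-to-income ratio and a credit score into a single risk score. The field names, weights, and threshold are hypothetical, standing in for the trained GBT/XGBoost models the session covers.

```python
# Hypothetical rule-based loan-risk score (weights and threshold are
# illustrative, not from the session; real systems use trained models).
def loan_risk_score(income: float, loan_amount: float, credit_score: int) -> float:
    """Blend debt-to-income and credit risk into a 0-1 risk score."""
    dti = min(loan_amount / max(income, 1.0), 1.0)          # capped debt-to-income
    credit_risk = 1.0 - min(max(credit_score, 300), 850) / 850
    return 0.6 * dti + 0.4 * credit_risk

def approve(income: float, loan_amount: float, credit_score: int,
            threshold: float = 0.5) -> bool:
    """Approve when the risk score falls below the threshold."""
    return loan_risk_score(income, loan_amount, credit_score) < threshold
```

The same scoring function could be applied in batch over a historical DataFrame or invoked per record in a streaming job, which is the batch/streaming/real-time progression the session walks through.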

Apache Spark and Sights at Speed: Streaming, Feature Management, and Execution

Harnessing continuous data streams can be a significant challenge without the right technologies and practices. The Capital One team will review common data architecture patterns and dig deep into Spark patterns for training, model execution, and streaming feature-state management.
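The gist of streaming feature-state management is keeping per-key feature values up to date as events arrive. Here is a minimal, hedged sketch in plain Python: a running mean of transaction amounts per account, updated incrementally. In Spark this kind of state would live in a stateful Structured Streaming operator rather than an in-memory dict.

```python
from collections import defaultdict

# Illustrative per-key feature state (in-memory stand-in for the managed
# state a streaming engine like Spark Structured Streaming maintains).
class RunningMeanState:
    def __init__(self) -> None:
        self.count = 0
        self.total = 0.0

    def update(self, value: float) -> float:
        """Fold one event into the state and return the current mean."""
        self.count += 1
        self.total += value
        return self.total / self.count

state = defaultdict(RunningMeanState)

def on_event(account_id: str, amount: float) -> float:
    """Update and return the running mean transaction amount for an account."""
    return state[account_id].update(amount)
```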

Quality Assurance for Big Data Processes Using Databricks and FINRA’s MegaSparkDiff

Part of the data engineer’s job is to ensure data quality. This is a significant challenge when dealing with massive volumes of real-time data, which FINRA does daily, processing up to 100 billion market events per trading day. This talk highlights how FINRA’s open-source tool, MegaSparkDiff, along with Databricks notebooks, helps address quality assurance issues in big data processing.
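Conceptually, this kind of quality check boils down to diffing two datasets: which rows exist only in the source, and which only in the target. The sketch below shows that idea with plain Python sets; it is not MegaSparkDiff's API, which performs the comparison distributed across Spark DataFrames.

```python
# Conceptual source-vs-target diff (plain-Python stand-in; MegaSparkDiff
# does the equivalent comparison at scale on Spark DataFrames).
def diff_datasets(source, target):
    """Return (rows only in source, rows only in target), each sorted."""
    src, tgt = set(source), set(target)
    return sorted(src - tgt), sorted(tgt - src)
```

An empty result on both sides indicates the two datasets match, which is the pass condition for this style of QA check.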

What’s Next

Read All Our Guides to Spark + AI Summit
