SLIDE 1

Make HTAP Real with TiFlash

A TiDB native Columnar Extension

SLIDE 2

About me

  • Liu Cong, 刘聪
  • Technical Director, Analytical Product@PingCAP
  • Previously

○ Principal Engineer@QiniuCloud
○ Technical Director@Kingsoft

  • Focus on distributed system and database engine
SLIDE 3

Traditional Data Platform

A traditional data platform relies on a complex architecture that moves data around via ETL. This introduces maintenance cost and delays the arrival of data in the data warehouse.

[Diagram: OLTP databases and NoSQL stores feed, via ETL, a Hadoop data lake, a big data compute engine, an analytical database, and a data warehouse.]

SLIDE 4

Fundamental Conflicts

  • Large batch processing vs. short point accesses

○ Row format for OLTP
○ Columnar format for OLAP

  • Workload interference

○ A single large analytical query can spell disaster for your OLTP workload

SLIDE 5

A Popular Solution

  • Use different types of databases

○ For live, fast data, use an OLTP-specialized database
○ For historical data, use Hadoop or an analytical database

  • Offload data via an ETL process into your Hadoop cluster or analytical database

○ Maybe once per day

SLIDE 6

Good enough, really?

SLIDE 7

TiFlash Extension

SLIDE 8

What’s TiFlash Extension

  • TiFlash is an extended analytical engine for TiDB
  • Powered by columnar storage and a vectorized compute engine
  • Tightly integrated with TiDB
  • Clear workload isolation: analytical queries do not impact OLTP
  • Partially based on ClickHouse, with tons of modifications
  • Speeds up reads for both TiSpark and TiDB
SLIDE 9

Architecture

[Diagram: a Spark cluster (TiSpark workers) and TiDB servers query two storage layers: the TiKV cluster (TiKV Nodes 1-3, each store holding several Region replicas) and the TiFlash Extension cluster (TiFlash Nodes 1-2) holding replicas of the same Regions.]

SLIDE 10

Columnstore vs Rowstore

  • Columnar storage stores data in columns instead of rows

○ Suitable for analytical workloads
  ■ Makes column pruning possible
○ Makes compression possible, further reducing IO
  ■ Roughly ⅕ of the average storage requirement
○ Bad at small random IO
  ■ Which is the typical workload for OLTP

  • Rowstore is the classic format for databases

○ Researched and optimized for OLTP scenarios for decades
○ Cumbersome in analytical use cases

SLIDE 11

Columnstore vs Rowstore

Rowstore (one row at a time):

    id    name   age
    0962  Jane   30
    7658  John   45
    3589  Jim    20
    5523  Susan  52

Columnstore (one column at a time):

    id:   0962 7658 3589 5523
    name: Jane John Jim  Susan
    age:  30   45   20   52

SELECT avg(age) FROM employee;

Usually you don't read all the columns of a table when performing analytics. A columnstore avoids the unnecessary IO, whereas a rowstore has to read them all.
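To make the IO difference concrete, here is a minimal Go sketch (illustrative only, not TiFlash's storage code): with a columnar layout, computing avg(age) touches a single contiguous array, while a row layout drags every field along.

    package main

    import "fmt"

    // ColumnTable stores each column as its own contiguous array.
    type ColumnTable struct {
    	ID   []string
    	Name []string
    	Age  []int
    }

    // avgAge touches only the Age column (column pruning): the ID and
    // Name arrays are never read, so their IO can be skipped entirely.
    func avgAge(t ColumnTable) int {
    	sum := 0
    	for _, a := range t.Age {
    		sum += a
    	}
    	return sum / len(t.Age)
    }

    func main() {
    	t := ColumnTable{
    		ID:   []string{"0962", "7658", "3589", "5523"},
    		Name: []string{"Jane", "John", "Jim", "Susan"},
    		Age:  []int{30, 45, 20, 52},
    	}
    	fmt.Println(avgAge(t)) // 36 — computed from the age column alone
    }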

SLIDE 12

Raft Learner

TiFlash synchronizes data into its columnstore via Raft Learners

  • Strong consistency on reads, enabled by the Raft protocol
  • Introduces almost zero overhead for the OLTP workload

○ Except the network overhead of sending the extra replicas
○ Slight overhead on reads (the Raft index is checked for each Region, 96 MB by default)
○ Multiple learners are possible, to speed up hot data reads

SLIDE 13

Raft Learner

[Diagram: Region A replicated across three TiKV nodes and one TiFlash node.]

Instead of connecting as a Raft Follower, each Region in TiFlash acts as a Raft Learner. When data is written, the Raft leader does not wait for the learner to finish writing. Therefore, TiFlash introduces almost no overhead when replicating data.
SLIDE 14

Raft Learner

[Diagram: the Raft Leader is at log index 4; the Raft Learner is at index 3.]

When a read arrives, the Raft Learner sends a request to the Leader to check the Raft log index and see whether its own data is up to date.

SLIDE 15

Raft Learner

[Diagram: the Learner has caught up to log index 4, matching the Leader.]

After its data catches up via the Raft log, the Learner serves the read request.
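A minimal sketch of this read path (illustrative Go with hypothetical names, not TiFlash's actual code): the learner asks the leader for its committed index, waits until it has applied at least that far, and only then serves the read.

    package main

    import "fmt"

    // readIndex stands in for the RPC that asks the Region's Raft
    // leader for its current committed log index.
    func readIndex(leaderCommitted uint64) uint64 { return leaderCommitted }

    // learnerRead serves a read only once the learner has applied the
    // Raft log at least up to the index the leader reported, which is
    // what makes the learner read strongly consistent.
    func learnerRead(leaderCommitted uint64, learnerApplied *uint64, read func()) {
    	idx := readIndex(leaderCommitted)
    	for *learnerApplied < idx {
    		*learnerApplied++ // simplified: apply one replicated log entry
    	}
    	read()
    }

    func main() {
    	applied := uint64(3) // learner at index 3, leader at 4 (as on the slide)
    	learnerRead(4, &applied, func() {
    		fmt.Println("serving consistent read at index", applied)
    	})
    }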

SLIDE 16

TiFlash goes beyond the columnar format

SLIDE 17

Scalability

  • An HTAP database needs to store a huge amount of data
  • Scalability is therefore very important
  • TiDB relies on Multi-Raft for scalability

○ One command to add or remove a node
○ Scaling is fully automatic
○ Smooth and painless data rebalance (see the sketch below)

  • TiFlash adopts the same design
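As a rough illustration of the rebalance idea (a toy sketch; PD's real scheduler considers far more signals than replica counts): keep moving one Region replica from the most loaded store to the least loaded one until the counts even out.

    package main

    import "fmt"

    // rebalance moves one Region replica at a time from the most loaded
    // store to the least loaded one until counts are roughly even.
    func rebalance(regionCount map[string]int) {
    	for {
    		var most, least string
    		for s := range regionCount {
    			if most == "" || regionCount[s] > regionCount[most] {
    				most = s
    			}
    			if least == "" || regionCount[s] < regionCount[least] {
    				least = s
    			}
    		}
    		if regionCount[most]-regionCount[least] <= 1 {
    			return // balanced enough
    		}
    		regionCount[most]--
    		regionCount[least]++
    		fmt.Printf("moved one Region from %s to %s\n", most, least)
    	}
    }

    func main() {
    	// A new node joins with zero Regions; rebalancing kicks in automatically.
    	rebalance(map[string]int{"store-1": 4, "store-2": 4, "new-store-3": 0})
    }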
SLIDE 18

Isolation

  • Perfect resource isolation
  • Data rebalance based on the “label” mechanism (see the sketch after this list)

○ Dedicated nodes for TiFlash / columnstore
○ TiFlash nodes carry their own AP label
○ Rebalancing happens only among AP-labeled nodes

  • Compute isolation comes naturally

○ Use a different set of compute nodes
○ Read only from nodes with the AP label
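Conceptually, the label mechanism reduces to filtering candidate stores by label when placing a replica. A minimal sketch with hypothetical types (not PD's actual scheduling code):

    package main

    import "fmt"

    // Store is a hypothetical stand-in for a storage node with labels.
    type Store struct {
    	ID     int
    	Labels map[string]string
    }

    // candidatesFor returns only the stores whose label matches the
    // replica kind: columnar (learner) replicas go to AP-labeled nodes,
    // row (TiKV) replicas to TP-labeled nodes.
    func candidatesFor(stores []Store, kind string) []Store {
    	var out []Store
    	for _, s := range stores {
    		if s.Labels["engine"] == kind {
    			out = append(out, s)
    		}
    	}
    	return out
    }

    func main() {
    	stores := []Store{
    		{1, map[string]string{"engine": "TP"}},
    		{2, map[string]string{"engine": "TP"}},
    		{3, map[string]string{"engine": "AP"}},
    	}
    	fmt.Println(candidatesFor(stores, "AP")) // only store 3 may hold the TiFlash replica
    }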

SLIDE 19

Isolation

[Diagram: TiDB / TiSpark on top of two clusters: the TiKV cluster (TiKV Nodes 1-3, Label: TP) and the TiFlash Extension cluster (TiFlash Nodes 1-2, Label: AP). Region 1 has Peers 1-3 with Label: TP on TiKV nodes, and Peer 4 with Label: AP constrained to TiFlash nodes.]

SLIDE 20

Integration

  • TiFlash is tightly integrated with TiDB / TiSpark

○ TiDB / TiSpark may choose to read from either side
  ■ Based on cost
○ If reading the TiFlash replica fails, the TiKV replica is read transparently (sketched below)
○ A single query can join data from both sides
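A minimal sketch of the engine choice plus transparent fallback (illustrative Go with made-up cost numbers; the real planner is driven by its cost model):

    package main

    import (
    	"errors"
    	"fmt"
    )

    // scan is a hypothetical table-scan request against one engine.
    func scan(engine string) error {
    	if engine == "tiflash" {
    		return errors.New("tiflash replica unavailable") // simulate a failure
    	}
    	fmt.Println("scanned via", engine)
    	return nil
    }

    // readTable picks the cheaper engine first, then transparently falls
    // back to the other replica if the first read fails.
    func readTable(tiflashCost, tikvCost float64) error {
    	first, second := "tiflash", "tikv"
    	if tikvCost < tiflashCost {
    		first, second = second, first
    	}
    	if err := scan(first); err != nil {
    		return scan(second) // transparent fallback to the other replica
    	}
    	return nil
    }

    func main() {
    	_ = readTable(10, 100) // full scan: columnar is cheaper, but falls back here
    }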

SLIDE 21

Integration

[Diagram: TiDB / TiSpark runs a single query against both clusters: an index scan (batch_id = 'B1328') on the TiKV cluster, joined with a table scan (price, pid) on the TiFlash Extension cluster.]

SELECT AVG(s.price) FROM product p, sales s
WHERE p.pid = s.pid AND p.batch_id = 'B1328';

SLIDE 22

MPP Support

  • TiFlash nodes form an MPP cluster by themselves
  • Full computation support at the MPP layer

○ Speeds up TiDB, since TiDB itself is not an MPP design
○ Speeds up TiSpark by avoiding disk writes during shuffle (see the sketch below)
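To see why skipping disk-backed shuffle files helps, here is the core of an in-memory hash exchange, as an MPP worker might implement it (a toy sketch with hypothetical types, not TiFlash's actual exchange operator):

    package main

    import "fmt"

    // Row is a hypothetical (join key, payload) pair.
    type Row struct {
    	Key     int
    	Payload string
    }

    // shuffle hash-partitions rows by join key across n workers, sending
    // them over in-memory channels instead of spilling shuffle files to
    // disk; rows with the same key always land on the same worker, which
    // is what makes a distributed join possible.
    func shuffle(rows []Row, workers []chan Row) {
    	for _, r := range rows {
    		workers[r.Key%len(workers)] <- r
    	}
    	for _, w := range workers {
    		close(w)
    	}
    }

    func main() {
    	workers := []chan Row{make(chan Row, 8), make(chan Row, 8)}
    	go shuffle([]Row{{1, "a"}, {2, "b"}, {3, "c"}, {4, "d"}}, workers)
    	for i, w := range workers {
    		for r := range w {
    			fmt.Printf("worker %d got key %d (%s)\n", i, r.Key, r.Payload)
    		}
    	}
    }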

SLIDE 23

MPP Support

[Diagram: TiDB / TiSpark sends plan segments to a coordinator, which dispatches them to MPP workers on TiFlash Nodes 1-3.]

TiFlash nodes exchange data with each other, enabling complex operators like distributed join.

SLIDE 24

Performance

  • Performance comparable to vanilla Spark on Hadoop + Parquet

○ Benchmarked with a pre-alpha version of TiFlash + Spark (without MPP support)
○ TPC-H, scale factor 100

SLIDE 25

Performance

[Chart: TPC-H benchmark results, Parquet vs. TiFlash.]

SLIDE 26

TiDB Data Platform

SLIDE 27

Traditional Data Platform

A traditional data platform relies on a complex architecture that moves data around via ETL, introducing maintenance cost and delaying the arrival of data in the data warehouse.

[Diagram: the traditional platform again — OLTP and NoSQL sources feeding, via ETL, a Hadoop data lake, a big data compute engine, an analytical database, and a data warehouse.]

SLIDE 28

TiDB Data Platform

[Diagram: the same workloads served by TiDB with the TiFlash Extension, replacing the ETL pipeline, Hadoop data lake, compute engine, analytical database, and data warehouse of the traditional platform with a single platform.]

TiDB with TiFlash Extension

SLIDE 29

Fundamental Change

  • “What happened yesterday” vs. “What’s going on right now”
  • Real-time reports for sales campaigns, adjusting prices in no time

○ Risk management with always up-to-date information
○ Very fast-paced replenishment based on live data and prediction

SLIDE 30

Roadmap

  • Beta / user POC in May 2019

○ Columnar engine and isolation ready
  ■ Access via Spark only

  • GA by the end of 2019

○ Unified coprocessor layer
  ■ Ready for both TiDB and TiSpark
  ■ Cost-based access path selection
○ Possibly the MPP layer done

SLIDE 31

Thanks!

Contact us: www.pingcap.com