Presto: A Distributed SQL Query Engine

What is Presto?

Presto™ (PrestoDB) is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.

Presto™ (PrestoSQL) is a high performance, distributed SQL query engine for big data.

The differences between the two are covered in detail below

Basic Concepts

Components

Coordinator

 Responsible for managing Worker and MetaStore nodes, accepting client query requests, and performing SQL parsing (Parser), execution-plan generation and optimization (Planner), and query-task scheduling (Scheduler)

The Coordinator interacts with Clients and Workers over a RESTful interface

Worker

 Responsible for the actual query computation and data reads/writes

Discovery Server

 Responsible for discovering the nodes in the cluster and for heartbeat-based health monitoring between nodes

The Discovery Server is usually co-deployed on the Coordinator node, but it can also be deployed on its own

Data Sources

Connector

 Responsible for accessing different data sources; analogous to a driver for accessing a database

Catalog

 Responsible for recording Schema information and references to DataSources. In Presto, a fully qualified table name takes the form <Catalog>.<Schema>.<Table>. For example, hive.test_data.test means:

  • Catalog is hive
  • Schema is test_data
  • Table is test
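As an illustration, the three parts can be recovered by splitting on the dots. This is a hypothetical helper for explanation only, not Presto code:

```java
public class QualifiedName {
    /** Splits a fully qualified Presto table name into catalog, schema, and table. */
    public static String[] parse(String name) {
        String[] parts = name.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("expected <catalog>.<schema>.<table>: " + name);
        }
        return parts;
    }
}
```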

Schema

 A way of organizing Tables

Table

 Equivalent to the concept of a table in a relational database

Query Model

Statement

 An SQL string compatible with the ANSI standard

Query

 When Presto parses an SQL statement, it converts it into a Query and creates a distributed Query execution plan

The whole query process involves the coordinated work of Stages, Tasks, Splits, Connectors, DataSources, and other components

Stage

 When Presto executes a query, it breaks the query down into multiple Stages

Task

 A Stage contains a series of Tasks; the Task is what actually runs on the Workers

Split

 A Split partitions a large dataset so that Tasks can execute over the pieces
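The idea can be sketched as partitioning a row range into fixed-size chunks that Tasks can process independently. This is an illustrative sketch, not Presto's actual Split API:

```java
import java.util.ArrayList;
import java.util.List;

public class Splitter {
    /** Partitions [0, totalRows) into chunks of at most chunkSize rows.
     *  Each {start, end} pair stands in for one Split handed to a Task. */
    public static List<long[]> split(long totalRows, long chunkSize) {
        List<long[]> splits = new ArrayList<>();
        for (long start = 0; start < totalRows; start += chunkSize) {
            splits.add(new long[] {start, Math.min(start + chunkSize, totalRows)});
        }
        return splits;
    }
}
```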

Driver

 A Driver is a sequence of operator instances; think of it as a group of physical operators in memory

A Task can contain one or more parallel Drivers

Operator

 An Operator can consume, transform, and produce data. For example, a Table Scan fetches data from a Connector and produces data for other Operators to consume

Exchange

 An Exchange transfers the data of a Query's different Stages between Presto nodes. A Task can produce data into an output buffer, and can consume data produced by other Tasks through an Exchange client
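The output-buffer/exchange-client handoff can be pictured as a bounded queue between the producing and consuming Tasks. This is an illustrative sketch only, not Presto's exchange implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ExchangeBuffer {
    // Bounded output buffer: the producing Task blocks when consumers fall behind.
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(16);

    /** Called by the upstream Task to publish a page of data. */
    public void produce(String page) throws InterruptedException {
        buffer.put(page);
    }

    /** Called by the downstream Task's exchange client. */
    public String consume() throws InterruptedException {
        return buffer.take();
    }
}
```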

Pros and Cons

Strengths

  • Ad hoc queries (second- or minute-level response times)
  • Roughly 10x faster than Hive
    • Fully in-memory parallel computation
    • Pipelined execution
    • Computation locality
    • Dynamic compilation of execution plans
    • Careful memory planning
    • Approximate queries (similar to BlinkDB)
    • GC tuning
  • Supports many data sources (Hive, Druid, Kafka, MySQL, MongoDB, Redis, JMX, ORC, etc.)
  • Clients for multiple languages (Java, Python, Ruby, PHP, Node.js, etc.)
  • JDBC / ODBC connectivity
  • Kerberos authentication
  • Can query LZO-compressed data
  • ANSI SQL (window functions, joins, aggregations, complex queries, etc.)
Ad hoc (a Latin phrase, literally "for this") querying means users flexibly choose query conditions based on their actual needs and the system generates the corresponding report; ordinary application queries, by contrast, are custom-built through programming
ANSI: American National Standards Institute

Weaknesses

  • No implicit type conversion in SQL, which Hive supports
  • No fault tolerance
    A query is distributed across multiple Workers; if any Worker fails, the Coordinator (Master) notices and treats the whole query as failed. Presto has no retry mechanism, so retries must be handled on the client side
Presto is strongly typed and performs no implicit type conversion, so values of different types cannot be compared, e.g. '2' > 1. Support can be added by giving the corresponding operator a new overload; the coding process is described below
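Since Presto itself never retries a failed query, the caller needs its own retry loop. A minimal generic sketch (all names here are illustrative):

```java
import java.util.concurrent.Callable;

public class QueryRetry {
    /** Runs the query up to maxAttempts times, rethrowing the last failure. */
    public static <T> T withRetry(Callable<T> query, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return query.call();
            } catch (Exception e) {
                last = e; // a Worker failed, so the whole query failed; try again
            }
        }
        throw last;
    }
}
```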

Architecture

Architecture Diagrams

Presto Architecture

(Image source: medium.com™)

Presto Architecture with SQL

(Image source: slideshare.net™)
As the architecture diagrams show, Presto follows the classic Master-Slave model

SQL Execution Flow

Presto SQL Execute

(Image source: cnblogs.com™)

Connector Interaction

Presto Connector

(Image source: slideshare.net™)

Interaction Sequence Diagram

sequenceDiagram

participant Client
participant Coordinator
participant Worker
participant Connector
participant Discovery Server

Client ->>+ Coordinator : query
Coordinator ->>+ Worker : choose workers
Worker ->>- Coordinator : return worker list
Coordinator ->>+ Worker : send task
Worker ->>+ Connector : load data
Connector ->>- Worker : return data
Worker ->> Worker : execute task
Worker ->>- Coordinator : return result
Coordinator ->>- Client : return result
loop regularly
  Coordinator -->> Discovery Server : heart beat
  Worker -->> Discovery Server : heart beat
end
Note that the heartbeats to the Discovery Server are not part of the query path itself
Cluster state is polled at a 5s interval; see io.prestosql.metadata.DiscoveryNodeManager#startPollingNodeStates for the details
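The 5s polling pattern used by DiscoveryNodeManager can be sketched with a ScheduledExecutorService. This is illustrative only; the real class wires its executor through Airlift:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NodePoller {
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    /** Periodically runs a pollWorkers-style state refresh (5s in Presto). */
    public void start(Runnable pollWorkers, long period, TimeUnit unit) {
        executor.scheduleWithFixedDelay(pollWorkers, 0, period, unit);
    }

    public void stop() {
        executor.shutdownNow();
    }
}
```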

Memory Management

Presto Memory Management

(Drawn with WPS™)

Service Discovery

 Presto's Workers do not actively send heartbeats; instead, the Discovery Server periodically probes whether each node is alive. Service discovery is built on the Airlift framework and communicates over HTTP. The discovery.uri parameter configured in etc/config.properties is passed straight through to Airlift. If we set discovery.uri to http://127.0.0.1:9999, then visiting http://127.0.0.1:9999/v1/service returns all registered services (including service type, ID, endpoint, and host node), as follows:

{
  "environment": "presto",
  "services": [
    {
      "id": "1f538ad9-e4b0-40a2-88a7-8e901b6d8ce6",
      "nodeId": "presto_node_1",
      "type": "presto-coordinator",
      "pool": "general",
      "location": "/presto_node_1",
      "properties": {
        "http": "http://127.0.0.1:9999",
        "http-external": "http://127.0.0.1:9999"
      }
    },
    {
      "id": "89a0bf46-a949-4d62-8c65-c6fb9afe1bb2",
      "nodeId": "presto_node_1",
      "type": "discovery",
      "pool": "general",
      "location": "/presto_node_1",
      "properties": {
        "http": "http://127.0.0.1:9999",
        "http-external": "http://127.0.0.1:9999"
      }
    },
    {
      "id": "253a93c3-ebe9-46cf-a0ee-41e7de56c620",
      "nodeId": "presto_node_1",
      "type": "presto",
      "pool": "general",
      "location": "/presto_node_1",
      "properties": {
        "node_version": "345",
        "coordinator": "true",
        "http": "http://127.0.0.1:9999",
        "http-external": "http://127.0.0.1:9999",
        "connectorIds": "system,druid"
      }
    },
    {
      "id": "0a9970f3-6717-4979-8f28-7f12c7bd7c75",
      "nodeId": "presto_node_1",
      "type": "jmx-http",
      "pool": "general",
      "location": "/presto_node_1",
      "properties": {
        "http": "http://127.0.0.1:9999",
        "http-external": "http://127.0.0.1:9999"
      }
    }
  ]
}
The IP addresses above have been redacted

 More concretely, the HeartbeatFailureDetector class starts a scheduled task with a 5s period that repeatedly calls updateMonitoredServices to refresh the cluster's service state. In addition, the DiscoveryNodeManager class starts another 5s scheduled task that repeatedly calls pollWorkers to check the state of each node. A Node is in one of three states, active, inactive, or shuttingDown, kept as sets in the AllNodes class. When Workers are later selected for scheduling, liveness is checked and the set of active Nodes is obtained via AllNodes#getActiveNodes. We can also visit http://localhost:9999/v1/info/state to check whether a node is active; a live node returns the string "ACTIVE"
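The active-node bookkeeping can be pictured as simple per-state sets. This is an illustrative sketch of the AllNodes idea, not its real API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class NodeStates {
    public enum State { ACTIVE, INACTIVE, SHUTTING_DOWN }

    private final Map<String, State> nodes = new HashMap<>();

    /** Records the state reported by the last poll of this node. */
    public void report(String nodeId, State state) {
        nodes.put(nodeId, state);
    }

    /** Only ACTIVE nodes are candidates for task scheduling. */
    public Set<String> getActiveNodes() {
        Set<String> active = new HashSet<>();
        nodes.forEach((id, state) -> {
            if (state == State.ACTIVE) {
                active.add(id);
            }
        });
        return active;
    }
}
```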

MPP

 Presto uses an MPP (Massively Parallel Processing) architecture to tackle large-scale data analysis. The architecture's main characteristics are:

  • Parallel task execution
  • Distributed computation
  • Shared Nothing
  • Horizontal scalability
  • Distributed (local) data storage

SPI

 Presto uses the SPI (Service Provider Interface) mechanism to support multiple data sources as plugins, enabling federated queries (Federation Query: referencing and combining different databases and schemas that live in entirely different systems, all from a single SQL query)
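The plugin idea can be sketched as a registry mapping the connector.name configured in a catalog file to a factory. This is illustrative only; Presto's real SPI lives in the presto-spi module:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ConnectorRegistry {
    /** Minimal stand-in for a connector implementation. */
    public interface Connector {
        String name();
    }

    private final Map<String, Supplier<Connector>> factories = new HashMap<>();

    /** Called once per plugin at load time, keyed by connector.name. */
    public void register(String name, Supplier<Connector> factory) {
        factories.put(name, factory);
    }

    /** Called when a catalog referencing this connector.name is loaded. */
    public Connector create(String name) {
        Supplier<Connector> factory = factories.get(name);
        if (factory == null) {
            throw new IllegalArgumentException("unknown connector: " + name);
        }
        return factory.get();
    }
}
```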

Servlet

graph TD

Load(fa:fa-spinner Load)
Construct(fa:fa-puzzle-piece Construct)
PostConstruct(fa:fa-at PostConstruct)
Init(fa:fa-cog Init)
Service(fa:fa-database Service)
Destroy(fa:fa-times Destroy)
PreDestroy(fa:fa-at PreDestroy)
Unload(fa:fa-bomb Unload)

Load ==> Construct
Construct ==> PostConstruct
PostConstruct ==> Init
Init ==> Service
Service ==> Destroy
Destroy ==> PreDestroy
PreDestroy ==> Unload

style PostConstruct fill:#0099FF
style PreDestroy fill:#0099FF

Comparison

Presto vs Apache Hive

Feature Comparison

                  | Presto                                                | Apache Hive
Workload          | Interactive queries                                   | High throughput
Join              | One large fact table + several small dimension tables | Fact table + fact table
Window functions  | Supported                                             | Supported
SQL standard      | ANSI SQL                                              | HiveQL

Architecture Comparison

Presto vs Hive on Architecture

(Image source: treasuredata.com™)

Presto vs Amazon Athena

 Essentially, Amazon Athena is a Presto with full standard SQL support

PrestoDB vs PrestoSQL

                  | PrestoDB             | PrestoSQL
Core developers   | Facebook             | Martin, Dain, and David
Communication     | RESTful and binary   | RESTful only
Query pushdown    | Supported            | Supported
Connector count   | 25                   | 30
Tech output       | Blogs                | Blogs + videos + books
Slack channel     |                      |
Code quality      |                      |
Contributor count |                      |
Open issues       |                      |
Active PRs        |                      |
Commit count      |                      |
This comparison table is still being updated...
Presto itself was open-sourced in 2012, with Martin, Dain, and David as the core developers
In early 2019, the three core developers forked PrestoDB 0.215 to create the new PrestoSQL project

Deployment

Here we use PrestoSQL as the example; the overall deployment steps are similar for PrestoDB. The difference is that PrestoSQL requires the newer Java 11, while PrestoDB still stays on Java 8

Standalone

Download

$ wget https://repo1.maven.org/maven2/io/prestosql/presto-server/345/presto-server-345.tar.gz
$ tar zxvf presto-server-345.tar.gz
$ ln -s presto-server-345 presto

Configure

$ cd presto
$ mkdir etc
$ cd etc
$ touch node.properties jvm.config config.properties log.properties
$ mkdir catalog
$ touch catalog/jmx.properties
$ vim catalog/jmx.properties
connector.name=jmx
$ vim config.properties
coordinator=true
node-scheduler.include-coordinator=true
query.max-memory=8GB
query.max-memory-per-node=1GB
http-server.http.port=9999
discovery-server.enabled=true
discovery.uri=http://127.0.0.1:9999
$ vim jvm.config
-server
-Xmx8G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:+ExitOnOutOfMemoryError
$ vim node.properties
node.environment=presto
node.id=presto_node_1
node.data-dir=/Users/benedictjin/apps/prestoData
$ vim launcher
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-11.0.5.jdk/Contents/Home
export PATH=$JAVA_HOME:$PATH

Start

$ cd ..
$ bin/launcher start

Check Status

$ bin/launcher status

View Logs

# Launcher log
$ tail -f ~/apps/prestoData/var/log/launcher.log

# Server log
$ tail -f ~/apps/prestoData/var/log/server.log

# HTTP request log
$ tail -f ~/apps/prestoData/var/log/http-request.log

Web UI

PrestoSQL Standalone UI

PrestoSQL Query Details

(Screenshots of PrestoSQL™)

Enable Debug Mode

$ vim etc/jvm.config
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
$ bin/launcher restart
Stopped 19181
Started as 21861

Docker

Download

$ docker pull prestosql/presto

Start

$ docker run -p 8080:8080 --name presto prestosql/presto

Connect with the CLI

$ docker exec -it presto presto --catalog tpch --schema sf1

Query

presto:sf1> show tables;
  Table
----------
customer
lineitem
nation
orders
part
partsupp
region
supplier
(8 rows)

Query 20200906_033706_00013_ne522, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.28 [8 rows, 158B] [28 rows/s, 558B/s]
presto:sf1> select * from customer limit 3;
 custkey |        name        |              address               | nationkey |      phone      | acctbal | mktsegment |
---------+--------------------+------------------------------------+-----------+-----------------+---------+------------+------------------------------------------------
75001 | Customer#000075001 | iQyegZCktrxX8jMFs9ip | 3 | 13-826-812-1458 | 6154.12 | FURNITURE | quickly ironic pinto beans up the blithely fluf
75002 | Customer#000075002 | TRzWtXys54mXmbNLlZQ4UR,5VkzA4Ycjsx | 24 | 34-175-692-7923 | 8258.87 | FURNITURE | tes. carefully even requests about the express,
75003 | Customer#000075003 | OVaJQHekQKFzsjqYpkLD | 24 | 34-440-315-8937 | 6870.87 | MACHINERY | even patterns. deposits unwind furiously. furi
(3 rows)

Query 20200906_033714_00014_ne522, FINISHED, 1 node
Splits: 21 total, 19 done (90.48%)
0.25 [11K rows, 0B] [43.6K rows/s, 0B/s]

Kubernetes

 See my other blog post: Helm in Practice

Hands-On

Example HTTP Connector

Configure the Catalog

$ vim etc/catalog/example-http.properties
connector.name=example-http
metadata-uri=https://raw.githubusercontent.com/prestosql/presto/master/presto-example-http/src/test/resources/example-data/example-metadata.json

Restart Presto

$ bin/launcher restart

Download the CLI

$ wget -c https://repo1.maven.org/maven2/io/prestosql/presto-cli/345/presto-cli-345-executable.jar -O bin/presto
$ chmod +x bin/presto

Start the CLI

$ bin/presto --server localhost:9999

Show Catalogs

$ presto> show catalogs;
   Catalog
--------------
druid
example-http
system
(3 rows)

Query 20200906_094341_00067_q49be, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.22 [0 rows, 0B] [0 rows/s, 0B/s]

Show Schemas

$ presto> show schemas from "example-http";
       Schema
--------------------
example
information_schema
tpch
(3 rows)

Query 20200906_094429_00069_q49be, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.22 [3 rows, 44B] [13 rows/s, 204B/s]

Switch Schema

$ presto> use "example-http".example;
USE

Show Tables

presto:example> show tables;
  Table
---------
numbers
(1 row)

Query 20200906_094445_00073_q49be, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.22 [1 rows, 24B] [4 rows/s, 112B/s]

Query

presto:example> select * from numbers;
  text  | value
--------+-------
one | 1
two | 2
three | 3
ten | 10
eleven | 11
twelve | 12
(6 rows)

Query 20200906_094451_00074_q49be, FINISHED, 1 node
Splits: 18 total, 18 done (100.00%)
1.69 [6 rows, 0B] [3 rows/s, 0B/s]

Apache Druid Connector

Configure the Catalog

$ vim etc/catalog/druid.properties
connector.name=druid
connection-url=jdbc:avatica:remote:url=http://remote_druid_cluster:8082/druid/v2/sql/avatica/

Expose the Broker Port

$ kill `ps -ef | grep 8082 | grep -v grep | awk '{print $2}'`; export POD_NAME=$(kubectl get pods --namespace default -l "app=druid,release=`helm list | grep druid- | awk '{print $1}'`" | grep broker | awk '{print $1}') ; nohup kubectl port-forward $POD_NAME 8082:8082 --address 0.0.0.0 2>&1 &

Restart Presto

$ bin/launcher restart

Download the CLI

$ wget -c https://repo1.maven.org/maven2/io/prestosql/presto-cli/345/presto-cli-345-executable.jar -O bin/presto
$ chmod +x bin/presto

Start the CLI

$ bin/presto --catalog druid --server localhost:9999 --schema default

Show Schemas

$ presto:default> show schemas;
       Schema
--------------------
druid
information_schema
(2 rows)

Query 20200906_094823_00079_q49be, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.22 [2 rows, 33B] [9 rows/s, 151B/s]

Switch Database

$ presto:default> use druid;
USE

Show Tables

presto:druid> show tables;
   Table
-----------
wikipedia
(1 row)

Query 20200906_083924_00017_pg9xm, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.29 [1 rows, 24B] [3 rows/s, 82B/s]

Query

presto:druid> select * from wikipedia where "cityName" != '' order by "__time" desc limit 3;
         __time          |    channel    | cityname |                                                            comment
-------------------------+---------------+----------+--------------------------------------------------------------------------------------------------------------------
2016-06-27 21:00:00.000 | #en.wikipedia | Glasgow | /* Presenters */Dan Neal Is Guest Presenting But Is Not A Main Stand In.
2016-06-27 21:00:00.000 | #en.wikipedia | Bothell | /* BAMN */
2016-06-27 21:00:00.000 | #de.wikipedia | Pamplona | /* Fahrzeuge mit Wankelmotor */ new entries, actual vehicles, easy to check on the web, no reason to delete or blo
(3 rows)

Query 20200906_084953_00028_pg9xm, FINISHED, 1 node
Splits: 18 total, 18 done (100.00%)
0.69 [1.53K rows, 0B] [2.22K rows/s, 0B/s]

Table Schema

presto:druid> describe wikipedia;
      Column       |     Type     | Extra | Comment
-------------------+--------------+-------+---------
__time | timestamp(3) | |
channel | varchar | |
cityname | varchar | |
comment | varchar | |
count | bigint | |
countryisocode | varchar | |
countryname | varchar | |
diffurl | varchar | |
flags | varchar | |
isanonymous | varchar | |
isminor | varchar | |
isnew | varchar | |
isrobot | varchar | |
isunpatrolled | varchar | |
metrocode | varchar | |
namespace | varchar | |
page | varchar | |
regionisocode | varchar | |
regionname | varchar | |
sum_added | bigint | |
sum_commentlength | bigint | |
sum_deleted | bigint | |
sum_delta | bigint | |
sum_deltabucket | bigint | |
user | varchar | |
(25 rows)

Query 20200906_131047_00010_v7ekz, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0.29 [25 rows, 1.68KB] [87 rows/s, 5.9KB/s]

Explain

presto:druid> explain select * from wikipedia where "cityName" != '' order by "__time" desc limit 3;
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Fragment 0 [SINGLE]
Output layout: [__time, channel, cityname, comment, count, countryisocode, countryname, diffurl, flags, isanonymous, isminor, isnew, isrobot, isunpatrolled, metroco
Output partitioning: SINGLE []
Stage Execution Strategy: UNGROUPED_EXECUTION
Output[__time, channel, cityname, comment, count, countryisocode, countryname, diffurl, flags, isanonymous, isminor, isnew, isrobot, isunpatrolled, metrocode, names
│ Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar, fl
│ Estimates: {rows: ? (?), cpu: ?, memory: ?, network: ?}
└─ TopN[3 by (__time DESC_NULLS_LAST)]
│ Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar,
└─ LocalExchange[SINGLE] ()
│ Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varch
│ Estimates: {rows: ? (?), cpu: ?, memory: ?, network: ?}
└─ RemoteSource[1]
Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:va

Fragment 1 [SOURCE]
Output layout: [__time, channel, cityname, comment, count, countryisocode, countryname, diffurl, flags, isanonymous, isminor, isnew, isrobot, isunpatrolled, metroco
Output partitioning: SINGLE []
Stage Execution Strategy: UNGROUPED_EXECUTION
TopNPartial[3 by (__time DESC_NULLS_LAST)]
│ Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar, fl
└─ TableScan[druid:druid.wikipedia druid.druid.wikipedia, grouped = false]
Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar,
Estimates: {rows: ? (?), cpu: ?, memory: 0B, network: 0B}
channel := channel:varchar:VARCHAR
flags := flags:varchar:VARCHAR
regionisocode := regionIsoCode:varchar:VARCHAR
isnew := isNew:varchar:VARCHAR
sum_deleted := sum_deleted:bigint:BIGINT
isunpatrolled := isUnpatrolled:varchar:VARCHAR
isminor := isMinor:varchar:VARCHAR
regionname := regionName:varchar:VARCHAR
isanonymous := isAnonymous:varchar:VARCHAR
sum_added := sum_added:bigint:BIGINT
sum_deltabucket := sum_deltaBucket:bigint:BIGINT
__time := __time:timestamp(3):TIMESTAMP
diffurl := diffUrl:varchar:VARCHAR
isrobot := isRobot:varchar:VARCHAR
metrocode := metroCode:varchar:VARCHAR
cityname := cityName:varchar:VARCHAR
count := count:bigint:BIGINT
countryname := countryName:varchar:VARCHAR
countryisocode := countryIsoCode:varchar:VARCHAR
namespace := namespace:varchar:VARCHAR
comment := comment:varchar:VARCHAR
page := page:varchar:VARCHAR
sum_commentlength := sum_commentLength:bigint:BIGINT
user := user:varchar:VARCHAR
sum_delta := sum_delta:bigint:BIGINT


(1 row)

Query 20200906_113246_00048_pg9xm, FINISHED, 1 node
Splits: 1 total, 1 done (100.00%)
0.34 [0 rows, 0B] [0 rows/s, 0B/s]

Explain Analyze

presto:druid> explain analyze select * from wikipedia where "cityName" != '' order by "__time" desc limit 3;
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Fragment 1 [SINGLE]
CPU: 3.74ms, Scheduled: 25.39ms, Input: 3 rows (1.18kB); per task: avg.: 3.00 std.dev.: 0.00, Output: 3 rows (1.18kB)
Output layout: [__time, channel, cityname, comment, count, countryisocode, countryname, diffurl, flags, isanonymous, isminor, isnew, isrobot, isunpatrolled, metroco
Output partitioning: SINGLE []
Stage Execution Strategy: UNGROUPED_EXECUTION
TopN[3 by (__time DESC_NULLS_LAST)]
│ Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar, fl
│ CPU: 0.00ns (0.00%), Scheduled: 7.00ms (1.65%), Output: 3 rows (1.18kB)
│ Input avg.: 3.00 rows, Input std.dev.: 0.00%
└─ LocalExchange[SINGLE] ()
│ Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar,
│ Estimates: {rows: ? (?), cpu: ?, memory: ?, network: ?}
│ CPU: 1.00ms (2.78%), Scheduled: 2.00ms (0.47%), Output: 3 rows (1.18kB)
│ Input avg.: 0.19 rows, Input std.dev.: 387.30%
└─ RemoteSource[2]
Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varch
CPU: 0.00ns (0.00%), Scheduled: 0.00ns (0.00%), Output: 3 rows (1.18kB)
Input avg.: 0.19 rows, Input std.dev.: 387.30%

Fragment 2 [SOURCE]
CPU: 35.54ms, Scheduled: 413.99ms, Input: 1533 rows (547.77kB); per task: avg.: 1533.00 std.dev.: 0.00, Output: 3 rows (1.18kB)
Output layout: [__time, channel, cityname, comment, count, countryisocode, countryname, diffurl, flags, isanonymous, isminor, isnew, isrobot, isunpatrolled, metroco
Output partitioning: SINGLE []
Stage Execution Strategy: UNGROUPED_EXECUTION
TopNPartial[3 by (__time DESC_NULLS_LAST)]
│ Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar, fl
│ CPU: 2.00ms (5.56%), Scheduled: 18.00ms (4.26%), Output: 3 rows (1.18kB)
│ Input avg.: 1533.00 rows, Input std.dev.: 0.00%
└─ TableScan[druid:druid.wikipedia druid.druid.wikipedia, grouped = false]
Layout: [__time:timestamp(3), channel:varchar, cityname:varchar, comment:varchar, count:bigint, countryisocode:varchar, countryname:varchar, diffurl:varchar,
Estimates: {rows: ? (?), cpu: ?, memory: 0B, network: 0B}
CPU: 33.00ms (91.67%), Scheduled: 396.00ms (93.62%), Output: 1533 rows (547.77kB)
Input avg.: 1533.00 rows, Input std.dev.: 0.00%
channel := channel:varchar:VARCHAR
flags := flags:varchar:VARCHAR
regionisocode := regionIsoCode:varchar:VARCHAR
isnew := isNew:varchar:VARCHAR
sum_deleted := sum_deleted:bigint:BIGINT
isunpatrolled := isUnpatrolled:varchar:VARCHAR
isminor := isMinor:varchar:VARCHAR
regionname := regionName:varchar:VARCHAR
isanonymous := isAnonymous:varchar:VARCHAR
sum_added := sum_added:bigint:BIGINT
sum_deltabucket := sum_deltaBucket:bigint:BIGINT
__time := __time:timestamp(3):TIMESTAMP
diffurl := diffUrl:varchar:VARCHAR
isrobot := isRobot:varchar:VARCHAR
metrocode := metroCode:varchar:VARCHAR
cityname := cityName:varchar:VARCHAR
count := count:bigint:BIGINT
countryname := countryName:varchar:VARCHAR
countryisocode := countryIsoCode:varchar:VARCHAR
namespace := namespace:varchar:VARCHAR
comment := comment:varchar:VARCHAR
page := page:varchar:VARCHAR
sum_commentlength := sum_commentLength:bigint:BIGINT
user := user:varchar:VARCHAR
sum_delta := sum_delta:bigint:BIGINT


(1 row)

Query 20200906_113431_00049_pg9xm, FINISHED, 1 node
Splits: 35 total, 35 done (100.00%)
5.76 [1.53K rows, 0B] [266 rows/s, 0B/s]

Source Code Analysis

For various reasons, the analysis here is based on PrestoSQL

Build

$ java -version
java version "11.0.9" 2020-10-20 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.9+7-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.9+7-LTS, mixed mode)
$ git clone --depth 1 --single-branch --branch master https://github.com/prestosql/presto/
$ cd presto
$ mvn clean install -T 1C -DskipTests -pl '!presto-docs'

Startup Flow

sequenceDiagram

participant PrestoServer
participant Platform
participant Server
participant Logger
participant ImmutableList
participant ImmutableList.Builder
participant Bootstrap
participant ConfigurationLoader

participant Injector
participant LifeCycleManager
participant StaticCatalogStore
participant PluginManager

participant ConfigurationFactory
participant Elements
participant RecordingBinder
participant Module

participant System

participant AbstractConfigurationAwareModule
participant ServerConfig

participant ServerMainModule
participant CoordinatorModule
participant WorkerModule

participant LifeCycleModule
participant ConfigurationModule

participant Guice


PrestoServer ->> PrestoServer : main


opt check the version of java
PrestoServer ->>+ Platform : nullToEmpty
Platform ->>- PrestoServer : java.version
alt java.version < 11
PrestoServer ->> System : exit(100)
end
end


PrestoServer ->>+ Server : new
Server ->>- PrestoServer : Server
PrestoServer ->> Server : start
Server ->> Server : doStart

Server ->>+ ImmutableList : builder
ImmutableList ->>- Server : ImmutableList.Builder


opt init modules
Server ->>+ ServerMainModule : new
ServerMainModule ->>- Server : ServerMainModule
end


Server ->> ImmutableList.Builder : add
Server ->> ImmutableList.Builder : build
ImmutableList.Builder ->> Server : ImmutableList
Server ->> Bootstrap : new
Bootstrap ->> Server : Bootstrap
Bootstrap ->>+ Bootstrap : initialize


opt config
Bootstrap ->>+ ConfigurationLoader : loadPropertiesFrom(configFile)
ConfigurationLoader ->>- Bootstrap : Map requiredProperties
Bootstrap ->>+ ConfigurationLoader : getSystemProperties
ConfigurationLoader ->>+ System : getProperties
System ->>- ConfigurationLoader : Properties
ConfigurationLoader ->>- Bootstrap : Map systemProperties

Bootstrap ->>+ ConfigurationFactory : new
ConfigurationFactory ->>- Bootstrap : ConfigurationFactory
end



opt install module by airlift
ConfigurationFactory ->> ConfigurationFactory : registerConfigurationClasses

ConfigurationFactory ->>+ Elements : getElements(modules)
Elements ->>+ RecordingBinder : new
RecordingBinder ->>- Elements : RecordingBinder
Elements ->> RecordingBinder : install(module)

opt setup modules, for example, ServerMainModule
Elements ->> Module : configure(binder)
Module ->> AbstractConfigurationAwareModule : setup
AbstractConfigurationAwareModule ->> ServerMainModule : setup
ServerMainModule ->>+ AbstractConfigurationAwareModule : buildConfigObject
AbstractConfigurationAwareModule ->>- ServerMainModule : ServerConfig
ServerMainModule ->>+ ServerConfig : isCoordinator
ServerConfig ->>- ServerMainModule : boolean

alt true
ServerMainModule ->>+ CoordinatorModule : new
CoordinatorModule ->>+ ServerMainModule : CoordinatorModule
else false
ServerMainModule ->>+ WorkerModule : new
WorkerModule ->>+ ServerMainModule : WorkerModule
end

ServerMainModule ->> AbstractConfigurationAwareModule : install
end

Elements ->>+ RecordingBinder : binder.elements
RecordingBinder ->>- Elements : List
Elements ->>- ConfigurationFactory : List


ConfigurationFactory ->> ConfigurationFactory : validateRegisteredConfigurationProvider
end




opt system modules
Bootstrap ->>+ LifeCycleModule : new
LifeCycleModule ->>- Bootstrap : LifeCycleModule
Bootstrap ->>+ ConfigurationModule : new
ConfigurationModule ->>- Bootstrap : ConfigurationModule
end

opt create the injector
Bootstrap ->>+ Guice : createInjector
Guice ->>- Bootstrap : Injector
end

opt create the life-cycle manager and start it
Bootstrap ->>+ Injector : getInstance
Injector ->>- Bootstrap : LifeCycleManager
Bootstrap ->> LifeCycleManager : start
end


Bootstrap ->>- Bootstrap : Injector


opt load resources
Server ->>+ Injector : getInstance(PluginManager.class)
Injector ->>- Server : PluginManager
Server ->> PluginManager : loadPlugins

Server ->>+ Injector : getInstance(StaticCatalogStore.class)
Injector ->>- Server : StaticCatalogStore
Server ->> StaticCatalogStore : loadCatalogs
end

Server ->> Logger : "======== SERVER STARTED ========"
Click here for the full-size diagram
To keep the diagram readable, the Module initialization phase omits NodeModule, DiscoveryModule, HttpServerModule, JsonModule, JaxrsModule, MBeanModule, PrefixObjectNameGeneratorModule, JmxModule, JmxHttpModule, LogJmxModule, TraceTokenModule, EventModule, JsonEventModule, ServerSecurityModule, AccessControlModule, EventListenerModule, GracefulShutdownModule, and WarningCollectorModule; the resource-loading phase omits AccessControlManager, PasswordAuthenticatorManager, EventListenerManager, GroupProviderManager, CertificateAuthenticatorManager, Announcer, ServerInfoResource, ResourceGroupManager, and SessionPropertyDefaults

Request Path

sequenceDiagram

participant Client
participant QueuedStatementResource
participant QueuedStatementResource.Query
participant QueuedStatementResource#queries
participant DispatchManager
participant QueuedStatementResource.Query
participant ExecutingStatementResource
participant ExecutingStatementResource#queries
participant io.prestosql.server.protocol.Query
participant Futures

opt send query to queue
Client ->>+ QueuedStatementResource : POST /v1/statement

QueuedStatementResource ->> QueuedStatementResource : postStatement

QueuedStatementResource ->>+ QueuedStatementResource.Query : new
QueuedStatementResource.Query ->>+ DispatchManager : createQueryId
DispatchManager ->>- QueuedStatementResource.Query : QueryId
QueuedStatementResource.Query ->>- QueuedStatementResource : QueuedStatementResource.Query

QueuedStatementResource ->> QueuedStatementResource#queries : put with queryId

QueuedStatementResource ->>+ QueuedStatementResource.Query : getQueryResults

QueuedStatementResource.Query ->>+ QueuedStatementResource.Query : createQueryResults
QueuedStatementResource.Query ->>+ QueuedStatementResource.Query : getNextUri
QueuedStatementResource.Query ->>- QueuedStatementResource.Query : URI (http://localhost:9999/v1/statement/queued/20200906_093843_00041_pg9xm/y3f077eadf2dca6425d56173bac9afea11161cdd0/1)
QueuedStatementResource.Query ->>- QueuedStatementResource.Query : QueryResults

QueuedStatementResource.Query ->>- QueuedStatementResource : QueryResults

QueuedStatementResource ->>+ Response : ok
Response ->>- QueuedStatementResource : Response

QueuedStatementResource ->>- Client : Response
end






opt get query from queue and dispatch it

Client ->> QueuedStatementResource : GET /v1/statement/queued/{queryId}/{slug}/{token}

QueuedStatementResource ->>+ QueuedStatementResource : getQuery
QueuedStatementResource ->>+ QueuedStatementResource#queries : get by queryId
QueuedStatementResource#queries ->>+ QueuedStatementResource : QueuedStatementResource.Query
QueuedStatementResource ->>- QueuedStatementResource : QueuedStatementResource.Query

opt wait for query to be dispatched, up to the wait timeout
QueuedStatementResource ->>+ QueuedStatementResource.Query : waitForDispatched
QueuedStatementResource.Query ->>+ DispatchManager : waitForDispatched
DispatchManager ->>- QueuedStatementResource.Query : ListenableFuture
QueuedStatementResource.Query ->>- QueuedStatementResource : ListenableFuture
end

opt when state changes, fetch the next result
QueuedStatementResource ->> QueuedStatementResource.Query : getQueryResults
QueuedStatementResource.Query ->> QueuedStatementResource.Query : createQueryResults
QueuedStatementResource.Query ->>+ QueuedStatementResource.Query : getNextUri
QueuedStatementResource.Query ->>+ QueuedStatementResource.Query : getRedirectUri
QueuedStatementResource.Query ->>- QueuedStatementResource.Query : URI (http://localhost:9999/v1/statement/executing/20200906_093843_00041_pg9xm/y714bd6e0605f3fdf63d1aac8d10aa92f4c81cf23/0)
QueuedStatementResource.Query ->>- QueuedStatementResource.Query : URI
QueuedStatementResource.Query ->> QueuedStatementResource : QueryResults

QueuedStatementResource ->>+ Futures : transform
Futures ->>- QueuedStatementResource : ListenableFuture
end

opt transform to Response
QueuedStatementResource ->>+ Response : ok
Response ->>- QueuedStatementResource : Response

QueuedStatementResource ->>+ Futures : transform
Futures ->>- QueuedStatementResource : ListenableFuture
end

QueuedStatementResource ->> QueuedStatementResource : bindAsyncResponse
QueuedStatementResource ->> Client : Response

end





opt execute

Client ->>+ ExecutingStatementResource : GET /v1/statement/executing/{queryId}/{slug}/{token}

opt get or recreate query
ExecutingStatementResource ->> ExecutingStatementResource : getQueryResults
ExecutingStatementResource ->>+ ExecutingStatementResource#queries : get by queryId

alt exist
ExecutingStatementResource#queries ->>- ExecutingStatementResource : io.prestosql.server.protocol.Query
else not-exist
ExecutingStatementResource ->>+ io.prestosql.server.protocol.Query : create
io.prestosql.server.protocol.Query ->>- ExecutingStatementResource : io.prestosql.server.protocol.Query
ExecutingStatementResource ->>+ ExecutingStatementResource#queries : computeIfAbsent
ExecutingStatementResource#queries ->>- ExecutingStatementResource : io.prestosql.server.protocol.Query
end

end

opt result
ExecutingStatementResource ->> ExecutingStatementResource : asyncQueryResults
ExecutingStatementResource ->>+ io.prestosql.server.protocol.Query : waitForResults
io.prestosql.server.protocol.Query ->>- ExecutingStatementResource : ListenableFuture
ExecutingStatementResource ->>+ ExecutingStatementResource : toResponse
ExecutingStatementResource ->>- ExecutingStatementResource : Response
ExecutingStatementResource ->>+ Futures : transform
Futures ->>- ExecutingStatementResource : ListenableFuture
ExecutingStatementResource ->> ExecutingStatementResource : bindAsyncResponse
end

end

ExecutingStatementResource ->>- Client : Response
Internally, PrestoSQL keeps calling the getNextUri method to obtain the endpoint for the next phase, driving the whole request chain to completion
The flow for cancelling a submitted query is similar and is not repeated here
To keep the diagram readable, finer-grained concepts such as Session, Transaction, StateMachine, Scheduler, and Task are not shown in the flow
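The nextUri handshake above can be sketched as a small polling loop. This is a hedged illustration rather than the real client: `QueryResults` here is a two-field stand-in for the JSON payload the server returns, and `fetch` abstracts away the HTTP GET.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class NextUriLoop {
    // Minimal stand-in for the QueryResults JSON the server returns:
    // a pointer to the next endpoint plus the rows of the current page.
    record QueryResults(String nextUri, List<String> data) {}

    // Follow the nextUri chain, collecting data pages, until nextUri is absent.
    static List<String> drain(String firstUri, Function<String, QueryResults> fetch) {
        List<String> rows = new ArrayList<>();
        String uri = firstUri;
        while (uri != null) {
            QueryResults page = fetch.apply(uri);
            rows.addAll(page.data());
            uri = page.nextUri();  // null => query finished
        }
        return rows;
    }

    public static void main(String[] args) {
        // Simulated server: queued -> executing/0 -> executing/1 -> done.
        Map<String, QueryResults> pages = new LinkedHashMap<>();
        pages.put("/v1/statement/queued",
                new QueryResults("/v1/statement/executing/0", List.of()));
        pages.put("/v1/statement/executing/0",
                new QueryResults("/v1/statement/executing/1", List.of("row1")));
        pages.put("/v1/statement/executing/1",
                new QueryResults(null, List.of("row2")));
        System.out.println(drain("/v1/statement/queued", pages::get));
    }
}
```

The same shape covers both transitions in the diagram: the queued response redirects to the executing endpoint, and each executing response either carries another nextUri or ends the loop.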

Pitfalls Encountered

Implicit type conversion is not supported

Description

 Originally, only comparisons between numeric types were supported, e.g. 2 > 1

@ScalarOperator(GREATER_THAN)
@SqlType(StandardTypes.BOOLEAN)
public static boolean greaterThan(@SqlType(StandardTypes.BIGINT) long left, @SqlType(StandardTypes.BIGINT) long right)
{
    return left > right;
}

Solution

 Add the following method to support comparisons between strings and numbers, e.g. '2' > 1

@ScalarOperator(GREATER_THAN)
@SqlType(StandardTypes.BOOLEAN)
public static boolean greaterThan(@SqlType(StandardTypes.VARCHAR) String left, @SqlType(StandardTypes.BIGINT) long right)
{
    return Long.parseLong(left) > right;
}
Of course, you can also call the cast function explicitly to convert the type
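One caveat with this workaround, shown here as a standalone sketch that mirrors the operator above without Presto's annotations or type machinery: Long.parseLong fails fast on non-numeric input, so the implicit VARCHAR-to-BIGINT comparison only works for well-formed numeric strings, which is another reason an explicit cast can be the safer choice.

```java
public class VarcharCompare {
    // Mirrors greaterThan(VARCHAR, BIGINT) above, minus the Presto annotations.
    public static boolean greaterThan(String left, long right) {
        return Long.parseLong(left) > right;  // throws NumberFormatException for e.g. "abc"
    }

    public static void main(String[] args) {
        System.out.println(greaterThan("2", 1));   // '2' > 1  ->  true
        try {
            greaterThan("abc", 1);                 // non-numeric VARCHAR
        } catch (NumberFormatException e) {
            System.out.println("non-numeric varchar rejected");
        }
    }
}
```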

compiler message file broken

Description

[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ presto-main ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 942 source files to /Users/benedictjin/code/prestosql/presto-main/target/test-classes
compiler message file broken: key=compiler.misc.msg.bug arguments=11.0.5, {1}, {2}, {3}, {4}, {5}, {6}, {7}
java.lang.NullPointerException
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.visitApply(Flow.java:1235)
at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCMethodInvocation.accept(JCTree.java:1634)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.scan(TreeScanner.java:49)
at jdk.compiler/com.sun.tools.javac.comp.Flow$BaseAnalyzer.scan(Flow.java:398)
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.visitVarDef(Flow.java:989)
at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCVariableDecl.accept(JCTree.java:956)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.scan(TreeScanner.java:49)
at jdk.compiler/com.sun.tools.javac.comp.Flow$BaseAnalyzer.scan(Flow.java:398)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.scan(TreeScanner.java:57)
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.visitBlock(Flow.java:997)
at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:1020)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.scan(TreeScanner.java:49)
at jdk.compiler/com.sun.tools.javac.comp.Flow$BaseAnalyzer.scan(Flow.java:398)
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.visitMethodDef(Flow.java:964)
at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCMethodDecl.accept(JCTree.java:866)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.scan(TreeScanner.java:49)
at jdk.compiler/com.sun.tools.javac.comp.Flow$BaseAnalyzer.scan(Flow.java:398)
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.visitClassDef(Flow.java:927)
at jdk.compiler/com.sun.tools.javac.tree.JCTree$JCClassDecl.accept(JCTree.java:774)
at jdk.compiler/com.sun.tools.javac.tree.TreeScanner.scan(TreeScanner.java:49)
at jdk.compiler/com.sun.tools.javac.comp.Flow$BaseAnalyzer.scan(Flow.java:398)
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.analyzeTree(Flow.java:1327)
at jdk.compiler/com.sun.tools.javac.comp.Flow$FlowAnalyzer.analyzeTree(Flow.java:1317)
at jdk.compiler/com.sun.tools.javac.comp.Flow.analyzeTree(Flow.java:218)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.flow(JavaCompiler.java:1401)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.flow(JavaCompiler.java:1375)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:973)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.lambda$doCall$0(JavacTaskImpl.java:104)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.handleExceptions(JavacTaskImpl.java:147)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:100)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:94)
at org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess(JavaxToolsCompiler.java:126)
at org.codehaus.plexus.compiler.javac.JavacCompiler.performCompile(JavacCompiler.java:174)
at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:1129)
at org.apache.maven.plugin.compiler.TestCompilerMojo.execute(TestCompilerMojo.java:181)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call(MultiThreadedBuilder.java:190)
at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call(MultiThreadedBuilder.java:186)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Solution

 A known bug in older JDK 11 releases; upgrading to the latest version at the time (JDK 11.0.9+7-LTS) fixed it. See: JDK-8212586

Terminating due to java.lang.OutOfMemoryError: Java heap space

Description

$ vim etc/jvm.config
-server
-Xmx128M
-XX:+UseG1GC
-XX:G1HeapRegionSize=4M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+ExitOnOutOfMemoryError
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005

Solution

  • Dump the heap to capture memory usage
$ vim etc/jvm.config
-XX:+HeapDumpOnOutOfMemoryError
  • Identify the cause of the OOM

    Analysis with the MAT tool showed that the system ClassLoader was using java.util.zip.ZipFile$Source to load jar files and keeping them in on-heap memory, consuming over 70% of the heap. This behavior was a deliberate change in JDK 9, made to avoid costly JNI calls and the crash risk of MMap; see: JDK-8145260

  • Increase the allocated memory

$ vim etc/jvm.config
-Xmx256M
Memory sizing: with less than 128 MB the process fails to start; with less than 256 MB, GC runs frequently; above 256 MB it runs stably

jmx.properties does not contain connector.name

Solution

 connector.name is a required field for the JMX Connector; just add the line connector.name=jmx to jmx.properties

$ vim catalog/jmx.properties
connector.name=jmx

Community Growth

Star Trend

Presto Star History

(Image source: the star-history.t9t.io™ website)

Personal Contributions

 See: How to Become an Apache PMC

Resources

GitHub

Book

  • Presto: The Definitive Guide
