Streaming Real-Time Statistics with PostgreSQL
Published: 2016-05-02 22:05:15


  PipelineDB is a streaming relational database built on PostgreSQL (version 0.8.1 is based on 9.4.4). Its defining characteristic is that it processes streaming data automatically: it does not store the raw data, only the processed results. This makes it a great fit for today's popular real-time stream processing workloads, such as App Store browsing statistics, website traffic statistics, and IT service monitoring.

  http://www.postgresql.org/about/news/1596/

  PipelineDB, an open-source relational streaming-SQL database, publicly released version (0.7.7) today and made the product available as open-source via their website and GitHub. PipelineDB is based on, and is wire compatible with, PostgreSQL 9.4 and has added functionality including continuous SQL queries, probabilistic data structures, sliding windowing, and stream-table joins. For a full description of PipelineDB and its capabilities see their technical documentation.

  PipelineDB’s fundamental abstraction is what is called a continuous view. These are much like regular SQL views, except that their defining SELECT queries can include streams as a source to read from. The most important property of continuous views is that they only store their output in the database. That output is then continuously updated incrementally as new data flows through streams, and raw stream data is discarded once all continuous views have read it. Let's look at a canonical example:

  CREATE CONTINUOUS VIEW v AS SELECT COUNT(*) FROM stream

  Only one row would ever physically exist in PipelineDB for this continuous view, and its value would simply be incremented for each new event ingested.

  For more information on PipelineDB as a company, product and for examples and benefits, please check out their first blog post on their new website.
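To make the "only the output is stored" property concrete, here is a toy Python model of a COUNT(*) continuous view. The class and method names are mine for illustration, not part of PipelineDB:

```python
class ContinuousCount:
    """Toy model of: CREATE CONTINUOUS VIEW v AS SELECT COUNT(*) FROM stream.

    Only the aggregate (a single materialized 'row') is kept;
    the raw events are discarded after they are counted."""

    def __init__(self):
        self.count = 0  # the one physical row for this continuous view

    def ingest(self, event):
        # Incrementally update the stored output; the event itself
        # is not retained anywhere.
        self.count += 1


v = ContinuousCount()
for event in [{"x": 1}, {"x": 2}, {"x": 3}]:
    v.ingest(event)
print(v.count)  # 3
```

This is the sense in which only one row ever physically exists for the continuous view: state is O(1) regardless of how many events flow through the stream.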

  Example:

  Create dynamic continuous views. No table definition is required, which feels very much like a living NoSQL store. Example code:

  pipeline=# CREATE CONTINUOUS VIEW v0 AS SELECT COUNT(*) FROM stream;

  CREATE CONTINUOUS VIEW

  pipeline=# CREATE CONTINUOUS VIEW v1 AS SELECT COUNT(*) FROM stream;

  CREATE CONTINUOUS VIEW

  Activate the continuous views:

  pipeline=# ACTIVATE;

  ACTIVATE 2

  Write data into the stream. Example code:

  pipeline=# INSERT INTO stream (x) VALUES (1);

  INSERT 0 1

  pipeline=# SET stream_targets TO v0;

  SET

  pipeline=# INSERT INTO stream (x) VALUES (1);

  INSERT 0 1

  pipeline=# SET stream_targets TO DEFAULT;

  SET

  pipeline=# INSERT INTO stream (x) VALUES (1);

  INSERT 0 1

  -- To stop receiving stream data, simply deactivate

  pipeline=# DEACTIVATE;

  DEACTIVATE 2

  Query the continuous views. Example code:

  pipeline=# SELECT count FROM v0;

  count

  -------

  3

  (1 row)

  pipeline=# SELECT count FROM v1;

  count

  -------

  2

  (1 row)

  pipeline=#
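The counts differ because of the stream_targets setting: the second INSERT was delivered only to v0, while the first and third went to every continuous view reading the stream. A toy Python model of that routing (the function and variable names are mine, not PipelineDB's):

```python
# Toy model of the session above: three INSERTs into "stream",
# with stream_targets restricting delivery for the second one.
views = {"v0": 0, "v1": 0}


def insert(stream_targets=None):
    # Every continuous view reading "stream" sees the event,
    # unless stream_targets limits delivery to specific views.
    for name in views:
        if stream_targets is None or name in stream_targets:
            views[name] += 1


insert()                       # delivered to both v0 and v1
insert(stream_targets={"v0"})  # SET stream_targets TO v0
insert()                       # SET stream_targets TO DEFAULT
print(views["v0"], views["v1"])  # 3 2
```

This reproduces the query results above: v0 saw all three inserts, v1 only two.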

  Trying it out in a local virtual machine

  Install it. Example code:

 

  Initialize the database. Example code:

  [email protected]> psql -V

  psql (PostgreSQL) 9.4.4

  [email protected]> cd /usr/lib/pipelinedb/bin/

  [email protected]> ll

  total 13M

  -rwxr-xr-x 1 root root 62K Sep 18 01:01 clusterdb

  -rwxr-xr-x 1 root root 62K Sep 18 01:01 createdb

  -rwxr-xr-x 1 root root 66K Sep 18 01:01 createlang

  -rwxr-xr-x 1 root root 63K Sep 18 01:01 createuser

  -rwxr-xr-x 1 root root 44K Sep 18 01:02 cs2cs

  -rwxr-xr-x 1 root root 58K Sep 18 01:01 dropdb

  -rwxr-xr-x 1 root root 66K Sep 18 01:01 droplang

  -rwxr-xr-x 1 root root 58K Sep 18 01:01 dropuser

  -rwxr-xr-x 1 root root 776K Sep 18 01:01 ecpg

  -rwxr-xr-x 1 root root 28K Sep 18 00:57 gdaladdo

  -rwxr-xr-x 1 root root 79K Sep 18 00:57 gdalbuildvrt

  -rwxr-xr-x 1 root root 1.3K Sep 18 00:57 gdal-config

  -rwxr-xr-x 1 root root 33K Sep 18 00:57 gdal_contour

  -rwxr-xr-x 1 root root 188K Sep 18 00:57 gdaldem

  -rwxr-xr-x 1 root root 74K Sep 18 00:57 gdalenhance

  -rwxr-xr-x 1 root root 131K Sep 18 00:57 gdal_grid

  -rwxr-xr-x 1 root root 83K Sep 18 00:57 gdalinfo

  -rwxr-xr-x 1 root root 90K Sep 18 00:57 gdallocationinfo

  -rwxr-xr-x 1 root root 42K Sep 18 00:57 gdalmanage

  -rwxr-xr-x 1 root root 236K Sep 18 00:57 gdal_rasterize

  -rwxr-xr-x 1 root root 25K Sep 18 00:57 gdalserver

  -rwxr-xr-x 1 root root 77K Sep 18 00:57 gdalsrsinfo

  -rwxr-xr-x 1 root root 49K Sep 18 00:57 gdaltindex

  -rwxr-xr-x 1 root root 33K Sep 18 00:57 gdaltransform

  -rwxr-xr-x 1 root root 158K Sep 18 00:57 gdal_translate

  -rwxr-xr-x 1 root root 168K Sep 18 00:57 gdalwarp

  -rwxr-xr-x 1 root root 41K Sep 18 01:02 geod

  -rwxr-xr-x 1 root root 1.3K Sep 18 00:51 geos-config

  lrwxrwxrwx 1 root root 4 Oct 15 10:47 invgeod -> geod

  lrwxrwxrwx 1 root root 4 Oct 15 10:47 invproj -> proj

  -rwxr-xr-x 1 root root 20K Sep 18 01:02 nad2bin

  -rwxr-xr-x 1 root root 186K Sep 18 00:57 nearblack

  -rwxr-xr-x 1 root root 374K Sep 18 00:57 ogr2ogr

  -rwxr-xr-x 1 root root 77K Sep 18 00:57 ogrinfo

  -rwxr-xr-x 1 root root 283K Sep 18 00:57 ogrlineref

  -rwxr-xr-x 1 root root 47K Sep 18 00:57 ogrtindex

  -rwxr-xr-x 1 root root 30K Sep 18 01:01 pg_config

  -rwxr-xr-x 1 root root 30K Sep 18 01:01 pg_controldata

  -rwxr-xr-x 1 root root 33K Sep 18 01:01 pg_isready

  -rwxr-xr-x 1 root root 39K Sep 18 01:01 pg_resetxlog

  -rwxr-xr-x 1 root root 183K Sep 18 01:02 pgsql2shp

  lrwxrwxrwx 1 root root 4 Oct 15 10:47 pipeline -> psql

  -rwxr-xr-x 1 root root 74K Sep 18 01:01 pipeline-basebackup

  lrwxrwxrwx 1 root root 9 Oct 15 10:47 pipeline-config -> pg_config

  -rwxr-xr-x 1 root root 44K Sep 18 01:01 pipeline-ctl

  -rwxr-xr-x 1 root root 355K Sep 18 01:01 pipeline-dump

  -rwxr-xr-x 1 root root 83K Sep 18 01:01 pipeline-dumpall

  -rwxr-xr-x 1 root root 105K Sep 18 01:01 pipeline-init

  -rwxr-xr-x 1 root root 50K Sep 18 01:01 pipeline-receivexlog

  -rwxr-xr-x 1 root root 56K Sep 18 01:01 pipeline-recvlogical

  -rwxr-xr-x 1 root root 153K Sep 18 01:01 pipeline-restore

  -rwxr-xr-x 1 root root 6.2M Sep 18 01:01 pipeline-server

  lrwxrwxrwx 1 root root 15 Oct 15 10:47 postmaster -> pipeline-server

  -rwxr-xr-x 1 root root 49K Sep 18 01:02 proj

  -rwxr-xr-x 1 root root 445K Sep 18 01:01 psql

  -rwxr-xr-x 1 root root 439K Sep 18 01:02 raster2pgsql

  -rwxr-xr-x 1 root root 62K Sep 18 01:01 reindexdb

  -rwxr-xr-x 1 root root 181K Sep 18 01:02 shp2pgsql

  -rwxr-xr-x 1 root root 27K Sep 18 00:57 testepsg

  -rwxr-xr-x 1 root root 63K Sep 18 01:01 vacuumdb

  [email protected]> pipeline-init -D $PGDATA -U postgres -E UTF8 --locale=C -W

  [email protected]> cd $PGDATA

  [email protected]> ll

  total 108K

  drwx------ 5 pdb pdb 4.0K Oct 15 10:57 base

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 global

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_clog

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_dynshmem

  -rw------- 1 pdb pdb 4.4K Oct 15 10:57 pg_hba.conf

  -rw------- 1 pdb pdb 1.6K Oct 15 10:57 pg_ident.conf

  drwx------ 4 pdb pdb 4.0K Oct 15 10:57 pg_logical

  drwx------ 4 pdb pdb 4.0K Oct 15 10:57 pg_multixact

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_notify

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_replslot

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_serial

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_snapshots

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_stat

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_stat_tmp

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_subtrans

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_tblspc

  drwx------ 2 pdb pdb 4.0K Oct 15 10:57 pg_twophase

  -rw------- 1 pdb pdb 4 Oct 15 10:57 PG_VERSION

  drwx------ 3 pdb pdb 4.0K Oct 15 10:57 pg_xlog

  -rw------- 1 pdb pdb 88 Oct 15 10:57 pipelinedb.auto.conf

  -rw------- 1 pdb pdb 23K Oct 15 10:57 pipelinedb.conf

  The stream-processing-related parameters cover things like memory sizing, synchronous commit, merge batch size, and the number of worker processes.

  pipelinedb.conf

  #------------------------------------------------------------------------------

  # CONTINUOUS VIEW OPTIONS

  #------------------------------------------------------------------------------

  # size of the buffer for storing unread stream tuples

  #tuple_buffer_blocks = 128MB

  # synchronization level for combiner commits; off, local, remote_write, or on

  #continuous_query_combiner_synchronous_commit = off

  # maximum amount of memory to use for combiner query executions

  #continuous_query_combiner_work_mem = 256MB

  # maximum memory to be used by the combiner for caching; this is independent

  # of combiner_work_mem

  #continuous_query_combiner_cache_mem = 32MB

  # the default fillfactor to use for continuous views

  #continuous_view_fillfactor = 50

  # the time in milliseconds a continuous query process will wait for a batch

  # to accumulate

  # continuous_query_max_wait = 10

  # the maximum number of events to accumulate before executing a continuous query

  # plan on them

  #continuous_query_batch_size = 10000

  # the number of parallel continuous query combiner processes to use for

  # each database

  #continuous_query_num_combiners = 2

  # the number of parallel continuous query worker processes to use for

  # each database

  #continuous_query_num_workers = 2

  # allow direct changes to be made to materialization tables?

  #continuous_query_materialization_table_updatable = off

  # inserts into streams should be synchronous?

  #synchronous_stream_insert = off

  # continuous views that should be affected when writing to streams.

  # it is string with comma separated values for continuous view names.

  #stream_targets = ''

  Start the database. You can see that PostGIS is supported out of the box. Example code:

 

  You can also see that PipelineDB has added HLL, Bloom filter, t-digest, and count-min sketch algorithms, and there is much more to explore, such as sliding-window continuous views, GROUPING SETS support, and so on.
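As an illustration of the count-min sketch idea, here is a generic Python sketch. This is not PipelineDB's cmsketch implementation, and the width/depth values are arbitrary, chosen only for the example:

```python
import hashlib


class CountMinSketch:
    """Minimal count-min sketch: a fixed-size table of counters that
    estimates item frequencies without storing the items themselves.
    Overestimates are possible; underestimates are not."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        # One hashed column per row, derived from a salted SHA-256.
        for row in range(self.depth):
            h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            yield row, int(h, 16) % self.width

    def add(self, item):
        for row, col in self._cells(item):
            self.table[row][col] += 1

    def estimate(self, item):
        # The minimum across rows limits the damage from collisions.
        return min(self.table[row][col] for row, col in self._cells(item))


cms = CountMinSketch()
for _ in range(5):
    cms.add("page:/home")
print(cms.estimate("page:/home"))  # 5
```

Structures like this are why a continuous view can answer frequency or cardinality questions over unbounded streams in constant space.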

  A performance test in a virtual machine on my laptop:

  Create 5 dynamic continuous views; a dynamic continuous view is one that needs no underlying base table. Example code:

  CREATE CONTINUOUS VIEW v0 AS SELECT COUNT(*) FROM stream;

  CREATE CONTINUOUS VIEW v1 AS SELECT sum(x::int),count(*),avg(y::int) FROM stream;

  CREATE CONTINUOUS VIEW v001 AS SELECT sum(x::int),count(*),avg(y::int) FROM stream1;

  CREATE CONTINUOUS VIEW v002 AS SELECT sum(x::int),count(*),avg(y::int) FROM stream2;

  CREATE CONTINUOUS VIEW v003 AS SELECT sum(x::int),count(*),avg(y::int) FROM stream3;

  Activate streaming statistics. Example code:

  activate;

  Check the data dictionary. Example code:

  select relname from pg_class where relkind='C';

  Batch insert test. Example code:

  [email protected]> vi test.sql

  insert into stream(x,y,z) select generate_series(1,1000),1,1;

  insert into stream1(x,y,z) select generate_series(1,1000),1,1;

  insert into stream2(x,y,z) select generate_series(1,1000),1,1;

  insert into stream3(x,y,z) select generate_series(1,1000),1,1;

  Test results. Note that the simple or extended query protocol must be used here; with prepared, only the last SQL statement takes effect. It is not yet clear whether this is a bug in PipelineDB or in pgbench. Example code:

  [email protected]> /opt/pgsql/bin/pgbench -M extended -n -r -f ./test.sql -P 1 -c 10 -j 10 -T 100000

  progress: 1.0 s, 133.8 tps, lat 68.279 ms stddev 58.444

  progress: 2.0 s, 143.9 tps, lat 71.623 ms stddev 53.880

  progress: 3.0 s, 149.5 tps, lat 66.452 ms stddev 49.727

  progress: 4.0 s, 148.3 tps, lat 67.085 ms stddev 55.484

  progress: 5.1 s, 145.7 tps, lat 68.624 ms stddev 67.795

  That is roughly 580,000 records ingested per second, while keeping the statistics of all 5 continuous views up to date.
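The 580,000/s figure follows directly from the pgbench numbers: each transaction in test.sql runs four INSERT ... generate_series(1,1000) statements, i.e. 4,000 rows, at roughly 145 transactions per second:

```python
tps = 145             # approximate tps from the pgbench progress lines
stmts_per_txn = 4     # test.sql contains four INSERT statements
rows_per_stmt = 1000  # generate_series(1,1000) emits 1000 rows
rows_per_sec = tps * stmts_per_txn * rows_per_stmt
print(rows_per_sec)   # 580000
```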

  On a physical machine, this could presumably reach the level of 5 million per second. I will try that when I have time.

  Since everything is done in memory, it is very fast.

  PipelineDB uses worker processes to handle the combining of data.

  The top output during the stress test looked like this:

 

  After writing 1 billion stream records, the database is still only 13MB in size, because stream data stays in memory and is discarded once processed. Example code:

 

  References:

  https://github.com/pipelinedb/pipelinedb

  https://www.pipelinedb.com/

