Hello! I have a case where I need to store a huge amount of data, partitioned by minute. But BigQuery supports only 4,000 partitions per table, and I need more history than that for this table. Is it possible to make a sharded table and for each shard make ...
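For context on why sharding comes up here: at minute granularity, BigQuery's 4,000-partition cap covers less than three days per table. A minimal sketch of the idea (the `events_` naming scheme is a hypothetical convention, not a Datastream feature) routes each row to a daily shard and a minute-level partition key within it:

```python
from datetime import datetime, timezone

PARTITION_LIMIT = 4000  # BigQuery's per-table partition cap

def shard_and_partition(event_time: datetime) -> tuple[str, str]:
    """Route a row to a daily shard table and a minute-level
    partition key within that shard (naming scheme is illustrative)."""
    shard = f"events_{event_time:%Y%m%d}"          # one table per day
    partition = event_time.strftime("%Y%m%d%H%M")  # minute granularity
    return shard, partition

# Minute partitioning alone exhausts the cap quickly:
minutes_per_day = 24 * 60                  # 1,440 partitions per day
print(PARTITION_LIMIT / minutes_per_day)   # under 3 days per table
```

With one shard per day, each shard needs only 1,440 minute partitions, comfortably under the limit.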
Hello! I've created a stream from MySQL into BigQuery, and the table is quite large. Is it possible to enable auto-partitioning, or set it up in some way for the destination table? By default it's not partitioned.
Hello! I've set up a stream in Datastream with MariaDB (using the MySQL profile) as source and BigQuery as destination. The source profile passed its connectivity test, but when I launched the stream no data was processed; the processed-events count stayed at zero. Al...
Hello! Is it possible to configure Datastream for MariaDB similarly to MaxScale? That would improve the CDC process.
https://mariadb.com/resources/blog/how-to-stream-change-data-through-mariadb-maxscale-using-cdc-api/
Thank you for your reply. But some parts of the flow described above don't work. I don't know the names and types of the meta columns that Datastream needs in the target table; I can only find them out after stream creation. Then I need to stop str...
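To my understanding of the Datastream documentation (worth verifying against a table created by a real stream), the metadata Datastream adds to BigQuery destination tables is a single `datastream_metadata` RECORD column. A sketch of that assumed schema fragment:

```python
# Assumed shape of the column Datastream adds to BigQuery targets;
# field names are taken from Datastream docs and should be verified
# against a table your own stream has created.
datastream_metadata = {
    "name": "datastream_metadata",
    "type": "RECORD",
    "fields": [
        {"name": "uuid", "type": "STRING"},
        {"name": "source_timestamp", "type": "INTEGER"},
    ],
}

field_names = [f["name"] for f in datastream_metadata["fields"]]
print(field_names)
```

If that matches your environment, the column could be included when pre-creating the target table, avoiding the stop-and-inspect cycle described above.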
When I'm creating a stream there is no option to choose a table; there is only an option to add a prefix to the dataset. Moreover, Datastream adds some meta columns and creates the structure with type mapping from the source into the destination. That's why your ad...
OK, thanks a lot for the explanation. As I understand it, Data Fusion is a kind of ecosystem for creating and maintaining various ETL/ELT jobs. Replication is one of its features, if you use Data Fusion and don't want to maintain a zoo of different to...