PostgreSQL or a protocol-compatible database (like Amazon Redshift) can be used as a data source with the SQL-compatible database connector.
There are no limitations on the dataset size; however, your PostgreSQL server should be able to execute aggregate queries fast enough (within seconds; 2 minutes max). For large datasets you can pre-aggregate the data with a materialized view, apply filters on indexed columns, or use the PipelineDB extension for real-time aggregations.
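For example, a materialized view that pre-aggregates a large fact table might look like this (the table and column names are hypothetical):

```sql
-- Pre-aggregate raw rows by day and country; point the cube at sales_daily
CREATE MATERIALIZED VIEW sales_daily AS
SELECT date_trunc('day', created_at) AS day,
       country,
       SUM(amount) AS total_amount,
       COUNT(*)    AS orders_count
FROM sales
GROUP BY 1, 2;

-- Refresh periodically (e.g. from cron) to keep the aggregates up to date
REFRESH MATERIALIZED VIEW sales_daily;
```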
Connection String should be a valid connection string for the Npgsql driver; for example:
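A minimal illustrative Npgsql connection string (all host, database, and credential values below are placeholders):

```
Host=db.example.com;Port=5432;Database=mydb;Username=myuser;Password=mypassword
```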
|Parameter|Description|
|---|---|
|Host|Specifies the host name of the machine on which PostgreSQL is running. Do not use "localhost" or a LAN server name; use only a public IP address or the server's domain name.|
|Database|The PostgreSQL database to connect to.|
|User ID|The username to connect with.|
|Password|The password to connect with.|
|Trust Server Certificate|Specifies whether to trust the server certificate without validating it (needed if your server uses a self-signed certificate).|
|Server Compatibility Mode|Specifies a compatibility mode for special PostgreSQL-like servers (for example, Redshift).|
Add `Trust Server Certificate=True` to the connection string to disable SSL certificate validation (needed if your server uses a self-signed certificate).
If you get an error for a condition like `some_column < 5`, this means that `some_column` has TEXT datatype. To fix this, go to the cube configuration form, find the dimension with Name "some_column", and add a Parameter to define an SQL expression with a cast:
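For instance, a cast to a numeric type might look like this (the target type is illustrative; use the one that matches your data):

```sql
CAST(some_column AS integer)
```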
If you get an error for a condition like `some_column="2019-05-15"`, this means that the column's data type is Date (or DateTime) and it cannot be compared with a TEXT value. To fix this, go to the cube configuration form, find the dimension with Name=some_column, and add a Parameter to define an SQL expression with a cast; for example:
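One option is to format the date column as text so it can be compared with a TEXT value (illustrative; adjust the format string to match the values you compare against):

```sql
to_char(some_column, 'YYYY-MM-DD')
```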
Add `ConvertInfinityDateTime=True` to the connection string; if this doesn't help, you need to exclude unrepresentable timestamps from the query results (either by WHERE filtering or by timestamp normalization with an SQL expression).
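A sketch of such normalization with a CASE expression (the column name `some_ts` is hypothetical):

```sql
-- Replace 'infinity' timestamps with NULL so query results can be serialized
CASE WHEN some_ts IN ('infinity', '-infinity') THEN NULL ELSE some_ts END
```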