How can I get the cost of a Postgres query that's listed in a log file?

We have an application that queries a Postgres DB. The tables are partitioned into Postgres schemas, which are simply namespaces for tables. I'm tasked with getting some sort of cost (dollars, CPU, memory, anything) for executed queries on a per-schema basis. Looking through the log files, the only way I can see to do this is to find log entries that contain the Postgres query text, where the schema name appears as part of the query. That's what I use to identify which query accessed which schema (roughly as in the sketch below).
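To make that concrete, here's roughly how I'm pulling these entries today; a minimal sketch with the Python google-cloud-logging client, where the project ID, the log-name fragment, and "my_schema" are just placeholders for my real values:

# A minimal sketch of what I'm doing today with the google-cloud-logging client.
# "my-project", the log-name fragment, and "my_schema" are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project")

# Roughly the same filter I use in Logs Explorer: Cloud SQL entries whose
# logged statement mentions the schema I'm interested in.
log_filter = (
    'resource.type="cloudsql_database" '
    'AND logName:"data_access" '
    'AND protoPayload.request.statement:"my_schema."'
)

for entry in client.list_entries(filter_=log_filter, page_size=100):
    print(entry.timestamp, entry.insert_id, entry.log_name)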

Now, using Logs Explorer, I see these Postgres queries listed in log entries with either "data_access" or "postgres_log" in their logNames. I'm not sure why a given query shows up in one or the other, or what the difference is. Each has its own fields describing the logged action, but neither has anything to hang my hat on as to the amount of resources the query took. Gemini gives me various methods that aren't consistent or accurate. So my questions are:

Knowing the difference between the "data_access" and "postgres_log" logs would be helpful.

Is the best I can do simply to count the number of queries referencing a schema via the protoPayload.request.statement (or similar) query string? Isn't there a join field (insertId?) that can be used to look up some measure of the effort a query used? (I can't find one.)
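For example, the best I've come up with so far looks roughly like the sketch below: just counting log entries whose statement text mentions each schema. The schema names and project ID are placeholders, the payload parsing is naive, and this only yields a query count, not any measure of cost.

# Count log entries per schema by scanning protoPayload.request.statement
# for the schema prefix. Schema names and project ID are placeholders.
from collections import Counter
from google.cloud import logging

SCHEMAS = ["schema_a", "schema_b"]  # placeholder schema names
client = logging.Client(project="my-project")

log_filter = 'resource.type="cloudsql_database" AND logName:"data_access"'
counts = Counter()

for entry in client.list_entries(filter_=log_filter):
    payload = entry.payload or {}
    statement = ""
    if isinstance(payload, dict):
        statement = payload.get("request", {}).get("statement", "")
    for schema in SCHEMAS:
        if f"{schema}." in statement:
            counts[schema] += 1

print(counts)  # e.g. Counter({'schema_a': 120, 'schema_b': 45})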

Is there a way to adjust Google's logging to always log a cost (of any type)? chunkCount isn't always there, and when it is, it's always "1" for me. I see references in the docs to statement_latency_ms but never actually find it in my entries. Lots of docs talk about BigQuery query costs, but mine are queries to an application's Postgres database, not BigQuery.

Am I SOL? If there's a guru we could hire to sort this out, we're open to that, but any advice is appreciated.

Thanx!
