docker run -d -p 127.0.0.1:8086:8086 --name influxdb influxdb:1.7.9
docker run -d -p 127.0.0.1:3000:3000 --link influxdb --name grafana grafana/grafana:6.5.1
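To quickly verify that InfluxDB is up, you can hit its /ping endpoint, which answers with HTTP 204 when the server is healthy:

$ curl -i http://127.0.0.1:8086/ping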
#!/usr/bin/env python3
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4

import cx_Oracle
import time

measurement_name = 'oracle_ash'
interval = 60

oracle_host = 'localhost'
oracle_port = 1521
oracle_service = 'orcl'
oracle_user = 'metrics'
oracle_passwd = 'metrics'


def dictionary_factory(cursor):
    # make the cursor return each row as a dict keyed by column name
    names = [d[0] for d in cursor.description]

    def create_row(*args):
        return dict(zip(names, args))

    return create_row


if __name__ == "__main__":
    oracle_conn = cx_Oracle.connect(
        oracle_user, oracle_passwd,
        "%s:%s/%s" % (oracle_host, oracle_port, oracle_service))
    cursor = oracle_conn.cursor()
    cursor.execute("""
        select
            sample_id,
            sample_time,
            session_id,
            session_serial#,
            session_type,
            sql_opname,
            sql_id,
            event,
            wait_class,
            wait_time,
            session_state,
            time_waited,
            blocking_session_status,
            blocking_session,
            blocking_session_serial#
        from v$active_session_history
        where sample_time > current_timestamp - interval '%d' second
    """ % interval)
    cursor.rowfactory = dictionary_factory(cursor)
    for row in cursor:
        # build the tag set anew for every row, so tags from the previous
        # row cannot leak into rows where they do not apply
        tags = {
            'host': oracle_host,
            'db': oracle_service
        }
        # InfluxDB does not like spaces in tag values, so we replace them with _
        tags['session_state'] = row['SESSION_STATE'].replace(' ', '_')
        # set wait_class and event only if the session is really waiting
        if row['WAIT_CLASS']:
            tags['wait_class'] = row['WAIT_CLASS'].replace(' ', '_')
            tags['event'] = row['EVENT'].replace(' ', '_')
        # set the sql operation name only if the sample belongs to a sql operation
        if row['SQL_OPNAME']:
            tags['sql_opname'] = row['SQL_OPNAME'].replace(' ', '_')
        # set sql id if present
        if row['SQL_ID']:
            tags['sql_id'] = row['SQL_ID']
        if row['BLOCKING_SESSION']:
            tags['blocking_session'] = row['BLOCKING_SESSION']
        tags_str = ",".join(["%s=%s" % (k, tags[k]) for k in tags])
        # emit every non-NULL column as a string field
        fields_str = ",".join(["%s=\"%s\"" % (f.lower(), row[f]) for f in row if row[f] is not None])
        # InfluxDB expects timestamps in nanoseconds
        influx_time = "%d000000000" % int(time.mktime(row['SAMPLE_TIME'].timetuple()))
        print("%s,%s %s %s" % (measurement_name, tags_str, fields_str, influx_time))
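The script depends on the cx_Oracle driver, which in turn needs the Oracle client libraries on the machine it runs on; assuming a working pip, installing it typically looks like this:

$ pip3 install cx_Oracle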
The next thing we need is an Oracle user that the script will use to access ASH:
sqlplus / as sysdba
SQL> create user metrics identified by metrics;
User created.
SQL> grant connect to metrics;
Grant succeeded.
SQL> grant select on v_$active_session_history to metrics;
Grant succeeded.
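To verify the grants, we can connect as the new user and query the view directly (the connect string assumes the defaults used in the script):

$ sqlplus metrics/metrics@localhost:1521/orcl
SQL> select count(*) from v$active_session_history;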
When executed, the script’s output should now look like this:
$ ./oracle_ash.py
oracle_ash,host=localhost,db=orcl,session_state=WAITING,wait_class=Other,event=os_thread_creation sample_id="272133",sample_time="2019-12-06 13:44:03.569000",session_id="4",session_serial#="27611",session_type="BACKGROUND",event="os thread creation",wait_class="Other",wait_time="0",session_state="WAITING",time_waited="5691",blocking_session_status="UNKNOWN" 1575636243000000000
oracle_ash,host=localhost,db=orcl,session_state=WAITING,wait_class=Other,event=oracle_thread_bootstrap sample_id="272133",sample_time="2019-12-06 13:44:03.569000",session_id="34",session_serial#="6929",session_type="BACKGROUND",event="oracle thread bootstrap",wait_class="Other",wait_time="0",session_state="WAITING",time_waited="19896",blocking_session_status="UNKNOWN" 1575636243000000000
oracle_ash,host=localhost,db=orcl,session_state=ON_CPU sample_id="272133",sample_time="2019-12-06 13:44:03.569000",session_id="84",session_serial#="44923",session_type="BACKGROUND",wait_time="644",session_state="ON CPU",time_waited="0",blocking_session_status="NOT IN WAIT" 1575636243000000000
oracle_ash,host=localhost,db=orcl,session_state=WAITING,wait_class=Other,event=ADR_block_file_read sample_id="272109",sample_time="2019-12-06 13:43:39.567000",session_id="84",session_serial#="232",session_type="BACKGROUND",event="ADR block file read",wait_class="Other",wait_time="0",session_state="WAITING",time_waited="10042",blocking_session_status="UNKNOWN" 1575636219000000000
oracle_ash,host=localhost,db=orcl,session_state=ON_CPU sample_id="272078",sample_time="2019-12-06 13:43:08.566000",session_id="4",session_serial#="27611",session_type="BACKGROUND",wait_time="999821",session_state="ON CPU",time_waited="0",blocking_session_status="NOT IN WAIT" 1575636188000000000
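Each output line follows the InfluxDB line protocol: the measurement name with comma-separated tags, then the comma-separated fields, then a nanosecond timestamp, all separated by spaces:

measurement,tag_key=tag_value,... field_key="field_value",... timestamp_ns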
Now we need to configure Telegraf, which will call our script periodically and feed the data into InfluxDB. This minimal example config will use a database called “telegraf”, which Telegraf will create automatically if it does not exist.
# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "60s"

# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"

# Read metrics from one or more commands that can output to stdout
[[inputs.exec]]
  ## Commands array
  commands = [ "./oracle_ash.py" ]
  data_format = "influx"
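Before starting the agent for real, Telegraf's --test flag runs the configured inputs once and prints the gathered metrics to stdout without writing anything to InfluxDB, which is a quick way to validate the config:

telegraf --config telegraf.conf --test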
We are now ready to start Telegraf.
telegraf --config telegraf.conf --debug
The --debug switch enables more verbose logging, so we will see right away if anything is wrong.
If everything went ok, InfluxDB is now being fed with ASH data. Let’s check!
There should now be a database called “telegraf” containing a measurement named “oracle_ash”.
$ docker exec -ti influxdb influx -database telegraf
Connected to http://localhost:8086 version 1.7.9
InfluxDB shell version: 1.7.9
> show measurements
name: measurements
name
----
oracle_ash
> select count(session_id) from oracle_ash
name: oracle_ash
time count
---- -----
0 3758
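Since ASH takes one sample per second for every active session, counting samples per time bucket approximates the load on the database. A query along these lines (just a sketch to adapt) makes a good starting point for a Grafana panel broken down by wait class:

> select count(session_id) from oracle_ash where time > now() - 1h group by time(1m), wait_class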