This summary of the video was created by an AI. It might contain some inaccuracies.
00:00:00 – 00:52:43
The video provides a comprehensive tutorial on using SQL extensions in the Dynatrace Extensions 2.0 framework for optimized database monitoring and observability. It begins with an explanation of the capabilities of ActiveGate and OneAgent in database monitoring, emphasizing the new Extension Execution Controller in the 2.0 framework for streamlined data collection and ingestion. The tutorial showcases writing custom SQL extensions using Visual Studio Code (VS Code), highlighting features like schema validation, auto-completion, and performance improvements. It demonstrates use cases such as querying business metrics and monitoring multiple database instances, with a focus on flexibility and efficiency in data handling.
Key processes demonstrated include setting up the development environment, creating and testing SQL queries, selecting the appropriate schema, and converting query results into metrics. Additionally, the video touches on parsing and naming metrics, handling query execution issues, and aggregating data. Log ingestion is also discussed, detailing necessary attributes such as content, event group, and timestamp, along with strategies for ingesting logs at intervals to avoid overloading the database.
The tutorial further covers deploying extensions via VS Code, configuring SQL Server endpoints, and ensuring robust data collection and ingestion by using certificates and authentication methods. Emphasis is placed on using diverse data dimensions for accurate insights, building and exporting dashboards, and automating configurations using REST API and JSON files. The video concludes with troubleshooting tips, focusing on enabling logs for self-monitoring, retrieving logs from ActiveGate, and leveraging SQL data source documentation to understand and build extensions effectively. The presenter suggests future sessions for more tailored use cases and invites viewer interactions for continuous learning.
00:00:00
In this segment of the video, the discussion revolves around a practical guide to SQL extensions with Dynatrace, aimed at simplifying observability for SQL query-based metrics using the new Dynatrace Extensions 2.0 framework. The video provides an introduction to how Dynatrace currently monitors databases through OneAgent and ActiveGate. OneAgent instruments applications to detect database calls and performance, while ActiveGate connects remotely to execute SQL queries and ingest various data into Dynatrace. The new 2.0 framework runs on ActiveGate and features an Extension Execution Controller for running extensions, collecting data from target databases via JDBC, and ingesting the data back into the Dynatrace environment. Existing extensions for popular databases like Oracle, SAP HANA, SQL Server, PostgreSQL, Db2, MySQL, and Snowflake are already available on the Hub, offering robust performance capabilities.
00:05:00
In this part of the video, the speaker discusses the out-of-the-box database monitoring capabilities and introduces a new app called App Spotlight, which allows for centralized database monitoring and observability with features like statement performance analysis. The segment then shifts to explain how to write a custom SQL extension using the VS Code add-on, which simplifies development with features like schema validation, auto-completion, testing, and deployment. With the SQL 2.0 data source, users can create custom queries, define execution frequencies, build generic topology entities, and ingest both metrics and logs. This release brings significant performance improvements, enabling parallel query execution for better efficiency. The speaker also highlights use cases for the SQL extension, such as querying business metrics and monitoring multiple database instances. Overall, the enhancements allow for seamless integration and improved data analysis capabilities.
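To make these moving parts concrete, here is a minimal sketch of what a declarative SQL extension might look like. The overall shape follows the SQL Server data source described in the video, but the exact field names, the metric key, and the demo query are assumptions that should be validated against the published schema in VS Code.

```yaml
# Minimal sketch of a declarative SQL extension (illustrative only;
# validate field names against the sqlServer data source schema in VS Code).
name: custom:sql-demo             # custom extensions use the "custom:" prefix
version: "1.0.0"
minDynatraceVersion: "1.265.0"
author:
  name: Example Author

sqlServer:                        # the SQL Server data source section
  - group: demo                   # queries are organized into groups
    interval:
      minutes: 1                  # execution frequency for this group
    query: SELECT 42 AS demo_value
    metrics:
      - key: custom.demo.value    # metric key the results are ingested under
        value: col:demo_value     # "col:" maps a result-set column to the value
        type: gauge
```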
00:10:00
In this segment of the video, the speaker elaborates on the primary use cases for SQL extensions, emphasizing their importance in monitoring specific business data within databases. The speaker explains that, while some types of monitoring are available out-of-the-box, SQL extensions allow users to easily create custom solutions for data collection and integration into Dynatrace. The process for developing a SQL Server extension is demonstrated using Visual Studio Code (VS Code). The steps include installing the necessary add-on, initializing the development environment, choosing a schema, and configuring the environment with certificates. A simple SQL Server instance example is provided, showing a table named "my stats" which stores data generated every minute by a SQL Server job. The speaker outlines the goal of collecting three kinds of data from this table: current metric values, batched collections, and aggregated values.
00:15:00
In this part of the video, the speaker delves into writing SQL queries to analyze database executions. They run a select query ordered from newest to oldest to confirm that new rows appear predictably every minute. They then create a query that returns a single row per execution, guaranteed to be the latest, and convert this data into a metric. The speaker tests the query, confirms it works, and emphasizes the importance of explicitly specifying details such as the query's frequency and data source so that future users understand the extension. They also discuss parsing the returned result set and assigning meaningful names to metrics, confirming the correctness of their queries with tests.
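A hedged sketch of what such a "latest value" query and its metric mapping might look like; the table and column names (my_stats, stat_value, created_at) are hypothetical stand-ins for the demo table, and TOP 1 is the SQL Server way to keep only the newest row.

```yaml
# Hypothetical "latest value" group: runs every minute and returns
# exactly one row, the most recent entry in the demo table.
sqlServer:
  - group: current value
    interval:
      minutes: 1
    query: >-
      SELECT TOP 1 stat_value
      FROM my_stats
      ORDER BY created_at DESC
    metrics:
      - key: custom.my_stats.current_value   # a meaningful, descriptive name
        value: col:stat_value
        type: gauge
```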
00:20:00
In this part of the video, the speaker demonstrates how to handle and execute SQL queries within an extension. They explain that a query will be executed every minute, and the results will be converted into metrics and ingested into the environment. If the query encounters issues, it will log warnings instead of crashing, indicating either empty results or other problems such as connection issues or permission errors.
The speaker then switches to executing more complex queries for aggregated data over the last 10 minutes, retrieving the number of rows inserted and the total value of status codes. They discuss the flexibility of adding dimensions for further analysis and how to create and map new metrics from query results.
Additionally, the video covers creating a new query that executes every 10 minutes, converting columns from the result set into metrics, and handling cases where some columns may not be needed. The speaker also addresses specifying default values when a query returns empty results, using the syntax of the database at hand, such as SQL Server's ISNULL function.
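Combining these points, a sketch of what the 10-minute aggregation group might look like; the table and column names are again hypothetical, and ISNULL wraps the SUM so an empty window yields 0 instead of NULL.

```yaml
# Hypothetical 10-minute aggregation over the demo table. ISNULL (SQL
# Server syntax) returns 0 instead of NULL when no rows match the window.
sqlServer:
  - group: aggregates
    interval:
      minutes: 10
    query: >-
      SELECT COUNT(*) AS rows_inserted,
             ISNULL(SUM(status_code), 0) AS status_total
      FROM my_stats
      WHERE created_at >= DATEADD(MINUTE, -10, GETUTCDATE())
    metrics:
      - key: custom.my_stats.rows_inserted   # one column -> one metric
        value: col:rows_inserted
        type: gauge
      - key: custom.my_stats.status_total    # unneeded columns can simply be skipped
        value: col:status_total
        type: gauge
```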
00:25:00
In this part of the video, the speaker discusses specifying a constant value for a dimension in the demonstration table, using the `const:` prefix to assign names and values, such as a 'stat name' dimension set to 'b'. They explain that dimension values can instead be retrieved from the result set when suitable columns are available. They move on to implementing log ingestion using SQL, demonstrating with a prepared query that retrieves two columns, 'timestamp' and 'message'. Because the messages are largely unique, using them as metric dimensions would risk exceeding dimension-value limits. They propose ingesting the messages as logs with timestamps instead, handling null or empty messages by returning a constant value so they remain visible for error analysis. Ingesting logs at longer intervals, typically five minutes, is suggested to avoid overwhelming the database. Finally, they describe using VS Code to identify missing attributes needed for the log ingestion process.
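A sketch of the constant-dimension idea: the `const:` prefix pins a fixed dimension value, while `col:` would read it from the result set. The dimension key and the table and column names are hypothetical.

```yaml
# Hypothetical dimension mapping on a metrics group: "const:" assigns a
# fixed dimension value, "col:" would take it from a result-set column.
sqlServer:
  - group: dimensioned stats
    interval:
      minutes: 1
    query: SELECT TOP 1 stat_value FROM my_stats ORDER BY created_at DESC
    metrics:
      - key: custom.my_stats.current_value
        value: col:stat_value
        type: gauge
    dimensions:
      - key: stat_name
        value: const:b            # constant value, as in the demo
```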
00:30:00
In this part of the video, the speaker discusses the essential attributes for building a log-ingestion extension. The first is the "content" attribute, which carries the main content of the log message. The second is "event group", which distinguishes between different log ingestion queries belonging to the same extension. Another important attribute is the "timestamp", which denotes when the log was created. The speaker explains that declaring the extension's metrics up front helps the environment know in advance which metrics the extension will provide. The typical metrics mentioned include count, batch total, and batch size.
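Putting those three attributes together, a hedged sketch of a log-ingestion mapping. The attribute names follow the description above but should be checked against the SQL data source schema, and the query columns and the 'EMPTY_MESSAGE' marker are hypothetical.

```yaml
# Hypothetical log-ingestion group, run every five minutes. ISNULL keeps
# empty messages analyzable by substituting a constant marker value.
# Attribute names below follow the video's description; verify them
# against the SQL data source schema in VS Code.
sqlServer:
  - group: stat logs
    interval:
      minutes: 5
    query: >-
      SELECT created_at,
             ISNULL(message, 'EMPTY_MESSAGE') AS message
      FROM my_stats
    logs:
      - content: col:message        # the main content of the log record
        timestamp: col:created_at   # when the log was created (optional)
        event.group: my-stats-logs  # distinguishes this query's logs
```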
Additionally, the speaker clarifies that while it’s typical to specify a timestamp, the system can automatically assign the current timestamp if omitted. They also highlight the structure and grouping of queries, emphasizing that the extension can handle up to 10 groups, each with up to 20 subgroups, for performance optimization. The segment concludes with the speaker building and deploying the extension using VS Code, ready to collect and retrieve data.
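In outline, the group/subgroup structure the speaker describes might look like the following; the subgroup names are hypothetical and the metric mappings are omitted for brevity.

```yaml
# Outline of the group/subgroup structure: up to 10 groups per extension,
# each with up to 20 subgroups, which the framework uses to parallelize work.
sqlServer:
  - group: business metrics
    subgroups:
      - subgroup: current values
        interval:
          minutes: 1
        query: SELECT TOP 1 stat_value FROM my_stats ORDER BY created_at DESC
        # metrics mappings omitted for brevity
      - subgroup: aggregates
        interval:
          minutes: 10
        query: SELECT COUNT(*) AS rows_inserted FROM my_stats
        # metrics mappings omitted for brevity
```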
00:35:00
In this part of the video, the presenter demonstrates how to upload and activate a new extension on a tenant. This includes connecting to the Dynatrace environment via VS Code, deploying the required certificates, and configuring the SQL Server endpoint. They provide details about setting up the connection, including specifying the data source, using basic authentication, and handling ports. The presenter also explains the activation process, which involves deploying the extension archive to an ActiveGate and starting data collection. Additionally, they answer questions about how SQL queries are processed, noting that each row in a query result is converted into a distinct data point.
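The resulting activation might look something like the monitoring configuration below, shown in YAML for readability (the API accepts the JSON equivalent). The scope targets an ActiveGate group, but the property names under "value" come from the extension's own activation schema, so everything here is a placeholder.

```yaml
# Hypothetical monitoring configuration for a SQL Server endpoint.
# Property names under "value" depend on the extension's activation schema.
- scope: ag_group-default          # the ActiveGate group that runs the extension
  value:
    enabled: true
    version: "1.0.0"
    sqlServer:
      host: sqlserver.example.com
      port: 1433                   # default SQL Server port
      authentication:
        scheme: basic              # basic authentication, as in the demo
        username: monitor_user
        password: "********"
```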
00:40:00
In this part of the video, the speaker discusses the importance of using diverse column values as dimensions to avoid redundancy and keep data points distinguishable. The segment covers building dashboards and exporting them as JSON, which can then be deployed seamlessly across environments as part of an extension. It highlights configuring alerts similarly, by specifying paths to their JSON definitions for automated activation. Features in 2.0 extensions include creating topologies and unified analysis screens, which can link entities and charts, allowing for complex extensions. When deploying an extension, initial errors may occur but are typically resolved once data polling starts. The video emphasizes enabling logs for self-monitoring and mentions using the REST API to automate the creation of monitoring configurations from CSV files, facilitating the automatic setup of many endpoints at once.
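Bundling dashboards and alerts works by referencing their JSON files from the extension YAML; the file paths below are hypothetical, and the paths are relative to the extension package. The CSV-driven automation mentioned above would then create the activations in bulk through the Extensions 2.0 API (the monitoring-configurations endpoint under /api/v2/extensions).

```yaml
# Referencing dashboard and alert JSON files from the extension YAML
# (file names here are hypothetical; paths are relative to the package).
dashboards:
  - path: dashboards/my_stats_overview.json
alerts:
  - path: alerts/my_stats_failures.json
```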
00:45:00
In this segment of the video, the speaker is inspecting various metrics using a data explorer to ensure that SQL Server job executions are successful. They verify metrics executed every 10 minutes and logs ingested every 5 minutes. They demonstrate filtering data by event group and extension name, confirming data retrieval and ingestion. The speaker explains that this data can be used in new notebooks, dashboards, and automation workflows. Additionally, they note that SQL extensions work on both Dynatrace Managed and SaaS deployments. The segment concludes with an introduction to a forthcoming app designed to simplify creating SQL extensions within the platform UI.
00:50:00
In this part of the video, the speaker discusses essential troubleshooting steps when connecting to remote data sources. They emphasize the importance of having logs enabled in the environment for self-monitoring and troubleshooting. Specific paths for retrieving logs directly from the ActiveGate are mentioned for both Windows and Linux. Detailed information about the ActiveGate support archive, the fastcheck log, and the SQL data source log files is provided for in-depth technical traces. Furthermore, the speaker shares links to the SQL data source documentation and an Extensions 2.0 observability clinic to aid in understanding and building 2.0 extensions with VS Code. The segment closes with thanks for the presentation and suggestions for future sessions focused on specific use cases and tips for leveraging SQL extensions. Viewers are encouraged to ask questions in the community or the comments section.