Qliktionary – a business intelligence dictionary
Application testing is a type of testing that validates the three separate components involved in the creation of an application: data, processes, and output.
Artificial intelligence, or AI, refers to the simulation of human intelligence processes with the use of computer systems and other machines.
While similar to artificial intelligence, the term “augmented intelligence” heavily focuses on the fact that simulated intelligence’s main purpose is to improve, not replace, human intelligence. The term, therefore, acts both as a type of categorization as well as a way to calm consumers down – the robots are not taking over.
Baseline testing capabilities act as standardized benchmark tests that can be reused as a testing reference point. This allows for effective quality assurance without constantly setting up new tests.
Large datasets that are commonly analyzed to derive data patterns and trends.
The backbone of blockchain technology is transparency. The goal – to create a validation system that does not belong to one single entity, that is entirely public, and that is trustworthy.
So, how does the underlying technology work? Take the example of a monetary transaction: blockchain involves validating the user’s transaction using a network of identical nodes that all reflect the user’s monetary status. Once the transaction is verified, a new block reflecting the purchase is added to the chain. This block is now permanent and unalterable. For a wonderful visual representation of this, head over to the Blockgeeks guide.
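The hash-linking behind that permanence can be sketched in a few lines of Python. This is a minimal illustration only, with invented transaction fields – a real blockchain adds consensus, signatures, and proof-of-work on top:

```python
import hashlib
import json

def make_block(prev_hash, transaction):
    """Build a block whose identity depends on the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "tx": transaction}, sort_keys=True)
    return {"prev": prev_hash, "tx": transaction,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

# A tiny chain: each block is linked to its predecessor by hash.
genesis = make_block("0" * 64, {"from": "mint", "to": "alice", "amount": 10})
block2 = make_block(genesis["hash"], {"from": "alice", "to": "bob", "amount": 3})

def chain_is_valid(chain):
    """Recomputing every hash detects any altered block."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"]:
            return False
        payload = json.dumps({"prev": cur["prev"], "tx": cur["tx"]}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != cur["hash"]:
            return False
    return True
```

Because each block’s hash covers the previous block’s hash, changing any historical block breaks every link after it – which is exactly why an accepted block is effectively unalterable.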
Business discovery is a term coined by Qlik and refers to the importance of having easy and fast access to business information, making data analysis as efficient as possible for business users.
Business intelligence encapsulates the collection, analysis, and integration of business information. Ultimately, it is a means to use data to make better business decisions.
Chief Data Officer (CDO)
The traditional definition of a chief data officer, or CDO, is the executive in charge of governance and utilizing information, i.e. data. The responsibilities are enterprise-wide and span across activities including data analysis, data processing, and data mining.
However, while this might be the traditional definition of a CDO, a new area of responsibility is beginning to surface – being data-insight-driven. This means focusing more on deriving insight from data, rather than just collecting and storing it. Furthermore, it is vital that CDOs recognize the value of welcoming and nurturing a data-literate culture. To find out more about this, head over to our post “Data literacy – a collection of lessons“.
A graphical representation of aggregated key business figures, representing how the business is doing at a specific point in time.
A metadata management service that acts as a business glossary for your business intelligence solution. It allows you to quickly discover, manage, and understand the data in your solution. Click here to check out NodeGraph’s Data Catalog.
Dark data is data that is being stored but not used. This can lead to unnecessary storing costs and, perhaps more detrimentally, often means that organizations are missing out on important business insights.
Data discovery is a user-oriented process for analyzing big data, facilitating simple and intuitive business intelligence.
According to good old Wikipedia, data governance is defined as “[the] process an organization follows to ensure high quality data exists throughout the complete lifecycle”.
A large quantity of raw data. More specifically, this data is enterprise-wide and therefore extremely vast.
Data lineage refers to the origin and transformations that data goes through over time. Basically, data lineage tells the story of a specific piece of data. This allows you to understand where the data comes from as well as when and where it separates and merges with other data. Although there are several ways of representing data lineage, visual representations are most common as they allow for a simpler overview of the data solution in question.
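As a minimal sketch of the idea, lineage can be modeled as a graph that maps each dataset to its direct sources – the dataset names below are invented for illustration:

```python
# Hypothetical lineage: each dataset maps to the sources it was derived from.
lineage = {
    "sales_report": ["sales_cleaned", "regions"],
    "sales_cleaned": ["sales_raw"],
    "sales_raw": [],
    "regions": [],
}

def upstream(dataset, graph):
    """Collect every dataset that feeds into `dataset`, however indirectly.

    Assumes the graph is acyclic, as lineage graphs are.
    """
    sources = set()
    for parent in graph.get(dataset, []):
        sources.add(parent)
        sources |= upstream(parent, graph)
    return sources
```

Walking the graph upstream from `sales_report` recovers its full origin story – every dataset it merges or separates from – which is what a visual lineage tool draws for you.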
What better way to gain a proper understanding of the term than to consult an expert? As can be explored further in Priyankar Bhunia’s article, James Fisher, VP Global Product Marketing at Qlik, offers his understanding of what the skill entails. He says that:
“Data literacy spans a broad set of processes in the analytics workflow. It is the ability to read, work with, analyse and argue with data.”
Data mining encapsulates the process of looking for new information through the analysis of databases. Essentially, every time you are looking to derive new insights by examining your data, you are data mining.
A data model is an abstract model that collects several data elements and describes how they are connected, as well as how data flows between them.
The perceived appropriateness of data in a specific situation, encapsulating elements such as accessibility, relevance, and timeliness.
Data science is an interdisciplinary field (i.e. a combination of computer science, maths and statistics, and business knowledge) that aims to provide intelligent business insights by studying, understanding, and learning from both structured and unstructured data.
First heard by us during Jordan Morrow’s Qlik webinar on data literacy, this refers to the act of arguing with your data and not being gullible enough to believe everything upfront. Question your data, investigate your data, and stay skeptical. For an on-demand version of the aforementioned webinar, click here.
We also picked up the notion of consumer-focused data trust from the Qlik webinar on data literacy, this time from Jennifer Belissent. We find it noteworthy as data trust is traditionally bundled together with data quality. And, while data quality is an essential part of the mix, Jennifer highlights that data literacy is also needed for the consumer to have the proper ability to trust the data. Basically, data literacy + data quality = data trust.
Sometimes shortened to DW, DWH, or EDW (Enterprise Data Warehouse), this is defined as a large quantity of enterprise-wide, structured data. This means that a data warehouse is very similar to a data lake – except that the data in a data warehouse is structured, not raw. And don’t worry, I’m not leaving you there. What is structured data? It means that the data exists within a field and that this field, in turn, exists within a record or file. Basically, by organizing or placing your data in fields, you are structuring it.
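The field-within-record idea can be shown with a tiny Python sketch. The customer record here is invented purely for illustration:

```python
import csv
import io

# Unstructured: a free-text note with no fields.
raw_note = "Alice bought 3 licenses on 2021-05-04"

# Structured: the same information placed into named fields within a record.
csv_data = io.StringIO("customer,licenses,date\nAlice,3,2021-05-04\n")
record = next(csv.DictReader(csv_data))

# Each value now lives in a field ("customer", "licenses", "date"),
# and the fields together form one record.
```

Once the data sits in fields, it can be queried, joined, and aggregated – which is what makes a warehouse more immediately usable than a lake of raw data.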
This is a term coined by Qlik during their recent BI trends webinar and is best described by Dan Sommer,
“It’s about taking fragmented data, people, and ideas out of their silos and connecting them in agile, innovative, and governed ways — known as the “de-silofication” of data.”
To download “11 BI Trends for 2018”, where the phrase was first introduced, click here.
A description or categorization of data that allows you to isolate specific data and see situations from your chosen perspective.
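A quick sketch of slicing by a dimension, using an invented sales table with a "region" dimension:

```python
sales = [
    {"region": "EMEA", "amount": 120},
    {"region": "APAC", "amount": 80},
    {"region": "EMEA", "amount": 40},
]

def slice_by(dimension, value, rows):
    """Isolate the rows where the chosen dimension has the chosen value."""
    return [row for row in rows if row[dimension] == value]

# Viewing the data from the EMEA perspective.
emea_sales = slice_by("region", "EMEA", sales)
```

Filtering on the dimension isolates exactly the data relevant to your chosen perspective; measures (like the amounts) can then be aggregated within that slice.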
ETL testing focuses on the quality of each ETL step (i.e. Extract, Transform, and Load) to ensure that no information is lost during the process.
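A minimal ETL-test sketch, assuming an invented source with two rows: the core check is that no records are lost between steps, and that transformed values match what we expect.

```python
def extract():
    # Stand-in for reading from a real source system.
    return [{"id": 1, "price": "10"}, {"id": 2, "price": "25"}]

def transform(rows):
    # Cast prices to numbers; a buggy transform might silently drop rows.
    return [{"id": r["id"], "price": float(r["price"])} for r in rows]

def test_no_information_lost():
    source = extract()
    loaded = transform(source)
    # Completeness: the record count must survive every ETL step.
    assert len(loaded) == len(source)
    # Correctness: transformed values must match the expected results.
    assert loaded[0]["price"] == 10.0
```

Real ETL tests extend the same pattern with checksums, null checks, and source-to-target reconciliation, but the count-and-compare idea is the backbone.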
An expression, in the Qlik world, is a formula combining fields and variables in order to calculate a specific value.
Represents a column in QlikView. Directly from the source: “The terms file, field and record are equivalent to table, column and row respectively.”
Functional testing is a type of black-box testing that focuses on a specific software component.
Non-functional testing involves testing non-functional requirements, such as how a system operates as a whole.
Broadly speaking, the GDPR concerns the privacy rights of data subjects, i.e. individuals. Its introduction brings a huge increase in the regulation of personal data, with the goal of giving ownership back to individuals.
Want to find out more about each specific element of the GDPR? We’ve got a guide.
A file that is created automatically to trace incidents that occur in an operating system or software run. For example, it might represent the time that your computer was started.
Maintenance testing is used to identify or diagnose equipment problems, or to confirm that repairs have been successful.
Data describing other data. For more info, head over to our guide “What is metadata? And why does it matter?“.
A data analysis platform. To read more about it, visit Qlik’s landing page.
Another business discovery platform by Qlik. Learn more here.
QlikView Access Point
A website that lists (and gives you access to) all of the QlikView documents you have permission to open. You need a QlikView Server in order to run the Access Point.
QlikView Publisher is a server component for the QlikView platform that allows you to load data from different data sources, prepare QlikView documents for use, and distribute them to a QlikView Server.
A QlikView Server connects active users to QlikView documents. It loads QlikView documents into server memory and performs calculations in response to those users’ requests.
QDF stands for QlikView Deployment Framework and contains standards and best practices for the installation and development of QlikView applications.
Short for Qlik Indexing Engine, QIX is the engine behind QlikView and Qlik Sense.
A yearly event organized by Qlik that NodeGraph is sponsoring.
A QVF is a file format used in Qlik Sense that makes it possible to locally save and move a Qlik Sense application.
A QVD is a Qlik-owned data format that stands for QlikView Data. It makes it possible for you to store data with a high level of compression as well as read data in QlikView. It is only compatible with Qlik applications.
QVS can mean one of two things:
- QlikView Server
- QlikView Script
QVW is a QlikView Worksheet and contains everything you need for a fully functioning QlikView application.
Regression testing involves testing your solution following new changes to ensure that the old infrastructure remains intact.
A script is a sequence of instructions, written in a scripting language, that can be executed without being compiled – distinguishing it from a compiled program.
In its most basic form, testing entails validating elements of your business intelligence solution as a means of assuring high data quality – securing that actual results match expected results. Testing, as opposed to debugging, involves identifying undetected errors, allowing you to always stay one step ahead. Click here to read the full article “What is testing? (A closer look at data testing)”.
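The "actual results match expected results" idea can be shown with a tiny sketch. The revenue figures here are invented; the shape of the check is what matters:

```python
def total_revenue(rows):
    """The calculation under test."""
    return sum(r["amount"] for r in rows)

def run_test():
    # A fixture with a known, hand-computed answer.
    fixture = [{"amount": 100}, {"amount": 250}]
    expected = 350
    actual = total_revenue(fixture)
    # Testing means securing that actual results match expected results.
    return actual == expected
```

Running checks like this against known fixtures is what lets you catch undetected errors before users do, rather than debugging after the fact.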
Qlik Sense uses a token model that allows you to allocate your purchased tokens as you see fit – a departure from the traditional license model.
If you are new to NodeGraph, first of all – welcome. We are a data quality platform for QlikView and Qlik Sense. To find out how we can help you take control of your data, get started today.