1: Tools for Analysis

This chapter discusses various mathematical concepts and constructions which are central to the study of the many fundamental results in analysis. Generalities are kept to a minimum in order to move quickly to the heart of analysis: the structure of the real number system and the notion of limit. The reader should consult the bibliographical references for more details.

Citation Analysis: Tools for finding who's citing you and calculating journal impact

PART 1: The tools that generate citation analysis: Google Scholar, Web of Science and Scopus

PART 2: The impact tools that utilize the metrics of citation analysis: Publish or Perish (PoP), Journal Citation Reports (JCR), the SJR indicator, and Altmetrics.

PART 3: Predatory Publishing and Bogus Impact Factors

PART 4: Journal Verification Resources like ULRICH'S WEB

OpenVigil - open tools for data-mining and analysis of pharmacovigilance data

OpenVigil 1 and 2 are software packages for analysing pharmacovigilance (adverse drug event) data. There are several national and international databases of so-called spontaneous adverse event reports, e.g., the U.S. FDA Adverse Event Reporting System (AERS, mostly domestic data) or the WHO Uppsala Monitoring Centre (international). Currently, analyses of FDA AERS (LAERS & FAERS) pharmacovigilance data are available. In addition to U.S. data, we have also imported German pharmacovigilance data. Data-mining features include highly configurable search criteria filters and output filters. Analyses include disproportionality analyses for signal detection, such as Proportional Reporting Ratio (PRR) calculations. Results can be viewed, sorted and filtered in the web browser or saved for further analysis in statistical software packages. Both projects aim at integrating these and other pharmacovigilance sources with pharmacoepidemiological data such as prescription data. OpenVigil 2 is designed for complete-case analyses.
OpenVigilFDA is a front-end to the openFDA interface, which has been developed by the FDA since 2014. It allows extraction of the latest reports. Due to technical limitations, the beta-version status and the ongoing changes to the openFDA API, OpenVigil 2 is more stable and superior for disproportionality analyses. OpenVigilFDA provides available-case analysis, i.e., records that are incomplete are still considered.

How can OpenVigil contribute to combating Coronaviridae/COVID-19?

Read recent publications on drugs to treat COVID-19 in the further literature section. OpenVigil can help to prevent adverse drug reactions of anti-COVID-19 drugs by educating the community about their specific dangers, as well as propose new drugs to interfere with the viral infection or with the overreacting immune response to SARS-CoV-2.

Where can I access OpenVigil?

There are live installations of both versions of OpenVigil with U.S. FDA AERS pharmacovigilance data, as well as OpenVigilFDA, freely available at Christian Albrechts University (CAU) of Kiel, Germany:

OpenVigil 2 and OpenVigilFDA are the successors of OpenVigil 1 and use cleansed FDA AERS data. For scientific research on U.S. data, do not use OpenVigil 1 but only version 2 or OpenVigilFDA!

There is also a version of OpenVigil 1 with German pharmacovigilance data available. Since the national authority (Bundesinstitut für Arzneimittel und Medizinprodukte, BfArM) has stopped providing domestic reports, there will be no updates to this incomplete dataset, which covers 2005 to 9/2015:
OpenVigil 1 German:

We are also working on a development version of OpenVigil 1 with Canadian pharmacovigilance data (the database is currently still empty; developer wanted):
OpenVigil 1 Canadian:
The German and Canadian pharmacovigilance data are of sufficient quality and do not need further drug-name mapping prior to mining or analysis.

Where can I download OpenVigil?

You can download the PHP-sources/executables of OpenVigil 1, OpenVigilFDA and OpenVigil 2.1-MedDRA at sourceforge:

Who can be contacted about this project?

The project is maintained by Dr. Ruwen Böhm, specialist for clinical pharmacology, Institute of Experimental and Clinical Pharmacology, UKSH Kiel; Prof. Dr.-Ing. Marina Tropmann-Frick, Department Informatik, Hochschule für Angewandte Wissenschaften Hamburg; and Prof. Dr. Hans-Joachim Klein, computer scientist, Institut für Informatik, Christian Albrechts University, Kiel. We can be reached at [email protected]

The OpenVigil project follows the HONcode and was certified in November 2015. The annual re-certification was made possible by private funding and the kind help of the HON foundation for projects without a dedicated budget.

All software uses browser cookies. Cookies are used for the captchas and to store previous queries as a convenience for the user (OpenVigilFDA only). Users are not tracked. Emails sent to the project members are treated confidentially and are neither systematically saved nor used for statistics. Access to all webpages/programs is logged, including your IP address. You can contact us if you wish to see or delete this data.
Cf. the installation overview pages for version numbers and the dates of the last changes to programs or databases, and the caveat documents for general pitfalls.
All software uses brand names which are not specifically identified (e.g., by ®). Cf. the documentation to understand the difference between drug name and brand name and to learn which output contains brand names. The authors declare no conflicts of interest as they have no financial or other relation to any of the producers.
Responsible for this website (Impressum / Betreiber der Website): Dr. med. Ruwen Böhm, Institut für Experimentelle und Klinische Pharmakologie, UKSH Kiel, Hospitalstr. 4, 24105 Kiel, Germany. Tel. +49 431 500 30414, [email protected]
The project is funded by public funding via the Christian Albrechts University (CAU) of Kiel, Germany. There is no funding via advertisements.
The OpenVigil project does not produce or gather any of the pharmacovigilance data itself but is dependent on external data sources.
Our software is being developed for physicians, pharmacists and scientists. Due to the origin and nature of the data and the ongoing work on our programs, all results should be considered unvalidated. Especially, any findings must not be used uncritically for therapy changes or legal proceedings. However, these data are well usable for hypothesis generation.
This page was last changed on 2021-05-04.

Pharmacovigilance is defined as the science and activities relating to the detection, assessment, understanding and prevention of adverse effects or any other drug-related problem.

Why do we have pharmacovigilance?

Triggered by the thalidomide (Contergan®) tragedy of 1957-1961, various countries have introduced the systematic collection of spontaneously filed reports of adverse events occurring during or after pharmacotherapy. This ongoing monitoring of (newly approved) drugs ensures detection of rarely occurring adverse events and other types of issues with the pharmaceutical product or the patient's adherence to it. So, while clinical trials can contribute to drug safety, pharmacovigilance can improve drug therapy safety!

What type of data is gathered?

Reports can be filed by physicians, pharmacists, pharmaceutical companies and patients. Depending on domestic laws, it is mandatory for most of these parties to report any observed adverse event. Recent EU directives recommend gathering reports from patients as well. The quality of the data is thus diverse: some reports are unusable due to missing data. On the other hand, reports made by pharmaceutical companies contain a lot of information due to enforced laws concerning patient safety. Most pharmacovigilance databases traditionally contain some basic data on the patient (e.g., gender and age), the adverse event(s) and a list of drugs. Depending on the primary data sources (e.g., outpatient or hospital patient) and the policy of the agency that maintains the database, other data, e.g., indications or laboratory values, can be added.
Owing to the nature of this 'spontaneous collection', these data have to be treated with caution and are generally not suited for hypothesis confirmation but only for hypothesis generation.

How does analysis of pharmacovigilance data contribute to health care?

Pharmacovigilance data-mining for signals of disproportionate reporting (SDR), i.e., disproportionally stronger associations between drugs and adverse events, is routinely done by the regulatory authorities. However, pharmacovigilance data is not only useful for monitoring new drugs but also for detecting more complex signals, e.g., drug-drug interactions or syndromes, or for further analysing known signals to find an especially vulnerable population or mode of application (so-called multi-item data mining). Data should be enriched with ontologies for these analyses (e.g., MedDRA, RxNorm, SNOMED, ATC, ICD-10/11).

Where can I extract or analyse pharmacovigilance data?

Open access to pharmacovigilance data is limited. The Freedom of Information Act (US) and similar laws in other jurisdictions have led to the availability of raw data (e.g., FDA AERS datafiles) and new portals to access data (e.g., from the EMA). A list of possible access and analysis options is provided on our resource library page.
However, the open availability combined with the advanced cleaning, filtering, extraction and analysis capabilities of OpenVigil 2 is unique: all pharmacovigilance research using OpenVigil software is completely transparent and reproducible, thus allowing other scientists to confirm any findings and expand the analyses.

How are statistical signals in pharmacovigilance data detected?

Statistical detection of signals, i.e., deciding whether a drug-event combination is a putative adverse drug reaction or just a random association, can be done using either (i) frequency-based methods comparing expected counts to observed counts for a drug-event combination, like the Relative Reporting Ratio (RRR), Proportional Reporting Ratio (PRR), Reporting Odds Ratio (ROR) or Likelihood Ratio Test (LRT), (ii) Bayesian methods like the Bayesian confidence propagation neural network (BCPNN) or the Poisson-Dirichlet process (DP), or (iii) the (Multi-item) Gamma Poisson Shrinker (GPS/MGPS).
All OpenVigil software provides RRR, PRR and ROR, which are similar in magnitude and explanatory power. With DE denoting the number of reports containing both drug and event, De the drug without the event, dE the event without the drug, de neither, D = DE + De, E = DE + dE, d = dE + de and N the total number of reports, these measures of disproportionality are calculated as RRR = DE·N/(D·E), PRR = (DE/D)/(dE/d) and ROR = (DE·de)/(De·dE). A value of 1 is considered normal background noise. Significance can be assessed using the chi-squared statistic with Yates' correction (chisq > 4) or the lower bound of the 95% confidence interval (CI) of RRR, PRR or ROR; e.g., for the ROR, s = sqrt(1/DE + 1/De + 1/dE + 1/de) and CI = e^(ln ROR ± 1.96·s). OpenVigil 2.2 will offer MGPS calculations; this signal detection algorithm is especially suited for small numbers of drug-event combinations (DE). Signal detection can also be used to find a subgroup of vulnerable patients: by stratifying the reports by age, gender, mode of administration, dosage, indication or other categories, it is possible to identify confounders and/or vulnerable patients.
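
These formulas can be sketched in a few lines of Python (a minimal illustration with made-up counts, not OpenVigil code; the variable names follow the notation above, and all four cells of the 2x2 table are assumed to be non-zero):

```python
import math

def disproportionality(DE, De, dE, de):
    """Compute RRR, PRR and ROR for a 2x2 drug-event contingency table.

    DE: reports with drug and event        De: reports with drug, other events
    dE: reports with event, without drug   de: reports with neither
    """
    D = DE + De                 # all reports mentioning the drug
    E = DE + dE                 # all reports mentioning the event
    d = dE + de                 # all reports not mentioning the drug
    N = DE + De + dE + de       # all reports

    rrr = (DE * N) / (D * E)
    prr = (DE / D) / (dE / d)
    ror = (DE * de) / (De * dE)

    # 95% CI of the ROR: CI = exp(ln ROR +/- 1.96*s)
    s = math.sqrt(1 / DE + 1 / De + 1 / dE + 1 / de)
    ci = (math.exp(math.log(ror) - 1.96 * s),
          math.exp(math.log(ror) + 1.96 * s))
    return rrr, prr, ror, ci

# Hypothetical example: 20 reports with drug+event, 80 drug-only,
# 100 event-only, 9800 with neither
rrr, prr, ror, ci = disproportionality(20, 80, 100, 9800)
# prr == 19.8 and ror == 24.5, both far above 1 -> disproportionality signal
```

A signal would typically be flagged when the lower CI bound stays above 1, mirroring the criterion described above.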

What are the usual pitfalls when analysing pharmacovigilance data?

By the very nature of this data collection, it represents only a certain part of the general population in health care (the so-called "open world" problem). Issues like under-/over-reporting and counting issues due to duplicate reports (multiplicates) are summarized in the OpenVigil 1 & 2 caveat document.
The quality of reports and of verbatim text items, e.g., DRUG.DRUGNAME in the FDA AERS data, requires preprocessing of the records and careful validation of any analysis results. OpenVigil 2 provides cleaning of imported data by using external databases and user input.
An analysis of pharmacovigilance data can usually not confirm a hypothesis; e.g., you cannot use it to prove a certain association. In some situations it might support a hypothesis. Instead, pharmacovigilance data is routinely used to generate new hypotheses that require testing in further in silico, preclinical or clinical research, as well as to give therapy guidance in direct health care.

Which clinical or scientific questions can be addressed by analysing pharmacovigilance data?

Obviously, detecting new adverse drug reactions is the primary reason why pharmacovigilance has been implemented, and thus it is the most important analysis goal. Other usages include the detection of especially vulnerable subpopulations, of harmful excipients/brands, of toxic chemical moieties, of syndromes and of drug-drug interactions, as well as comparing drugs within a drug class and drug repositioning/repurposing.

How about other usage (e.g., marketing or legal proceedings)?

Because of the limitations of pharmacovigilance data analysis due to the origin and nature of the data, findings of disproportionality generally allow one neither to prove an assumption nor to support a hypothesis. Occasionally, it might be useful for these purposes to show that a certain signal was or was not present at a certain date in the past. Interpretation of queries requires sound knowledge of statistics, pharmacy, pharmacology and the clinical significance of any findings. To fully understand the results, a team combining expertise in these areas is recommended.

Due to the method of collecting pharmacovigilance data and the nature of the data itself, several precautions need to be taken for high-quality analyses of drugs and their putative adverse drug reactions. This is especially important if you choose to install OpenVigil yourself.

  • Caveat documents: methodological mistakes when crafting or interpreting queries
    • OpenVigil 1 & 2 caveat v2.0.2 (old version: OpenVigil 1 caveat v1.0) [mirrored]
    • WHO UMC caveat [mirrored]; BfArM caveat, now offline (instructions for coders which are also useful for decoders) [mirrored]
    • Known issues in OpenVigil 1 & 2 and in OpenVigilFDA

    • Imported files and record import failures in OpenVigil 1, e.g., at CAU Kiel:
    • Overview of an installation of OpenVigil 1, e.g., at CAU Kiel:
    • Imported files in OpenVigil 2, e.g., at CAU Kiel:
    • Overview of an installation of OpenVigil FDA, e.g., at CAU Kiel:
    • Data validation, cleansing procedures and quality assessment primer
    • Disproportionality analysis primer
    • Installing OpenVigil 1
    • Installing OpenVigil 2
    • Relational schema for drugs/brands in OpenVigil 2 for SQL queries
    • Interface to RxNorm to map verbatim drugnames to USAN:
    • Suggested software packages for further analysis of the extracted data:
      • Spreadsheet software, e.g. Microsoft Excel, LibreOffice Calc, Gnumeric
      • Database software, e.g. Microsoft Access, LibreOffice Base, MySQL; data-formatting software like Trifacta Wrangler, OpenRefine
      • Statistical Computing software, e.g. R/RStudio
        • asdfree for FDA data, an importer of FDA LAERS & FAERS data into R
        • openFDA data importer for R (old version)
        • USA – FDA: (LAERS) and (FAERS)
          NBER preprocessed AERS files (SAS, Stata and CSV formats of FDA AERS data offered by the National Bureau of Economic Research)
        • Canada – HC:
        • Germany – BfArM
        • Netherlands - Lareb
        • Europa - EMA
        • Japan - JADER
        • WHO – UMC - Vigibase
        • Search engines based on US FDA pharmacovigilance data
          • eHealthMe:
          • DrugCite:
          • AERS spider:
          • CzeekV:
          • AERSMine:
          • USA - FDA: openFDA API to access LAERS/FAERS
            Experimental front-ends: openFDA demo and others
          • Health Canada:
          • EU ADR reports:
          • German BfArM ADR reports:
          • Dutch Lareb ADR reports:
          • Japanese JADER:
          • Observational Medical Outcomes Partnership (OMOP)

          Common analysis scenarios are depicted here. Please note that our installations of OpenVigil do not use weekly-updated data, so monitoring newly approved drugs is usually not readily possible.

          PEST Analysis

          This is a framework you can use to analyze the external environment. The process entails learning about various external factors which affect the organization.

          It is an acronym of four factors. The four elements studied in PEST are:

          Political: This factor studies the current political situation, including potential political influences.

          Economic: This factor is about the impact of the national and global economy.

          Social: This external factor focuses on the ways society can affect your company.

          Technological: This factor discusses the effect of emerging technology.

          Other variations of the PEST analysis are STEP, STEEP, STEEPLE, and PESTLE. Some additional external factors which can be studied are the legal, environmental and ethical factors.

          Load the Analysis ToolPak in Excel

          If you need to develop complex statistical or engineering analyses, you can save steps and time by using the Analysis ToolPak. You provide the data and parameters for each analysis, and the tool uses the appropriate statistical or engineering macro functions to calculate and display the results in an output table. Some tools generate charts in addition to output tables.

          The data analysis functions can be used on only one worksheet at a time. When you perform data analysis on grouped worksheets, results will appear on the first worksheet and empty formatted tables will appear on the remaining worksheets. To perform data analysis on the remainder of the worksheets, recalculate the analysis tool for each worksheet.

          Click the File tab, click Options, and then click the Add-Ins category.

          If you're using Excel 2007, click the Microsoft Office Button, and then click Excel Options.

          In the Manage box, select Excel Add-ins and then click Go.

          If you're using Excel for Mac, in the file menu go to Tools > Excel Add-ins.

          In the Add-Ins box, check the Analysis ToolPak check box, and then click OK.

          If Analysis ToolPak is not listed in the Add-Ins available box, click Browse to locate it.

          If you are prompted that the Analysis ToolPak is not currently installed on your computer, click Yes to install it.

          Note: To include Visual Basic for Application (VBA) functions for the Analysis ToolPak, you can load the Analysis ToolPak - VBA Add-in the same way that you load the Analysis ToolPak. In the Add-ins available box, select the Analysis ToolPak - VBA check box.

          The Analysis ToolPak is not available for Excel for Mac 2011. See I can't find the Analysis ToolPak in Excel for Mac 2011 for more information.

          Some languages aren't supported by the Analysis ToolPak. The ToolPak displays in English when your language is not supported. See Supported languages for more information.

          Load the Analysis ToolPak in Excel for Mac

          Click the Tools menu, and then click Excel Add-ins.

          In the Add-Ins available box, select the Analysis ToolPak check box, and then click OK.

          If Analysis ToolPak is not listed in the Add-Ins available box, click Browse to locate it.

          If you get a prompt that the Analysis ToolPak is not currently installed on your computer, click Yes to install it.

          Now the Data Analysis command is available on the Data tab.

          Free data analysis tools are used to analyze data and create meaningful insights out of a data set. They are a set of tools which help businesses create a data-driven decision-making process. Some very popular, industry-known tools are Microsoft Excel, Tableau Public, KNIME, Rattle GUI for R, Talend, H2O, Trifacta, Orange, RapidMiner and QlikView. These tools ship with several out-of-the-box features that help in the data analysis process. They are easy to learn, and analysis solutions can be developed very quickly with them compared to standard programming for data analysis.

          Data Analysis Tools

          Below are the different tools of Data Analysis.

          Hadoop, Data Science, Statistics & others

          1. Excel

          Excel still attracts people for data analysis and, yes, it is still indispensable as an analytics tool. There are many free online tutorials available that teach Excel and VBA, through which you can master Excel. All features such as exploring, summarizing and visualizing data through various graphical tools are available in Excel.

          It is very easy to learn and master Excel. Excel is still a basic tool in data science and analytics, and knowledge of Excel will help you in your data science career. Though Microsoft Excel is not free, there are similar spreadsheet tools, such as OpenOffice and many others on the market, which provide the same features as Excel. One small drawback of Excel is that it can't be used for very large datasets.

          2. Tableau

          • Tableau is a free tool (in its Public edition) for visualizing everything from simple to complex data. It is interactive, and labels, tooltips, column sizes and almost anything else can be customized. The drag-and-drop interface is really helpful in this software, and calculations can also be done in Tableau. Even someone without any background in analytics can see and understand data on the Tableau platform.
          • Dashboards and worksheets are created in Tableau for data analysis and visualization. Tableau helps you see data from different perspectives through its dashboards. One can easily enter the world of data science through Tableau. Also, Tableau integrates with the Python and R programming languages.

          3. Trifacta

          Trifacta is an open-source tool for data wrangling which makes data preparation easy for data analysis. Trifacta helps to transform, explore and analyze data, turning a raw data format into a clean, arranged one. It uses machine learning techniques to help users in data analysis and exploration. Trifacta's other name, Data Wrangler, makes it clear that it is most useful in data cleaning.

          It was developed in 2012 by Joe Hellerstein, Jeffrey Heer, and Sean Kandel. Trifacta works with the cloud in collaboration with AWS and has won an award from AWS for machine learning deployment. Unlike Excel, Trifacta helps you work with large datasets. Also, its text-editing suggestions are incredible.

          4. RapidMiner

          RapidMiner is an integrated tool for data preparation, machine learning, deep learning, and other data analysis techniques. Workflows are called processes, and the output of one process becomes the input of another. RapidMiner can be extended via programming languages or its own plugins. Some versions of RapidMiner are free.

          The products of RapidMiner include RapidMiner Studio, RapidMiner Auto Model, RapidMiner Turbo Prep, RapidMiner Server, and RapidMiner Radoop. We can inspect data by loading data into RapidMiner and do calculations or sort the data inside the tool. RapidMiner is mainly designed for non-programmers. RapidMiner also helps in data cleaning and preparing charts.

          5. Talend

          Talend is an open-source tool for cloud-based data integration. Talend helps to import data and move it to the data warehouse as quickly as possible. Talend has a unified platform. Also, the Talend community is so strong and diverse that you never know which background the person on the other side comes from.

          Talend Platforms, Talend Enterprise, and Talend Open Studio help with almost everything related to data, so you may never look for another tool once you start working with Talend. Among the three, the most used is Talend Open Studio. Talend's collaboration and management features are as commendable as its data integration.

          6. Qlikview

          QlikView is recommended as one of the best tools for data visualization. It is fast, easy and unique in nature. The QlikView community has discussion forums, blogs, and a library, and it helps to solve most of your queries. QlikView shows relationships within data using different colors and helps users make the right decisions through its different approaches to data visualization.

          If you are interested in layout design, QlikView is the way to go. It is good to have knowledge of data modeling and SQL basics to become proficient in QlikView.

          7. Orange

          The Orange toolkit, which is open source, can be used for anything from simple data visualization to complicated machine learning algorithms. It can also be used as a Python library. It works like a canvas on which the user places widgets to create a workflow; all data functionality is handled on the widget canvas. Users can explore the various visualization techniques available in the tool.

          There are many add-ons for the Orange tool as it is used in the machine learning algorithm as well. Data mining can also be done in this tool.

          8. H2O

          H2O helps in finding patterns of data. Its applications are mostly in machine learning and artificial intelligence but it provides really good insights about data. H2O has a built-in function to guess the structure of the incoming data set.

          There are also other tools, like OpenRefine for sorting and filtering data, Fusion Tables for charts and visualization, Microsoft Power BI for data visualization and data wrangling, Google Dashboards for creating reports, Plotly for statistical analysis, and Gephi for statistical visualization, among many more.


          Data analysis can be done easily with a bit of practice. Not all tools will help equally; it is good to select one tool and become a master of it. Understanding data is essential to know where we really stand in terms of data analysis. Programming is not really important for visualizing and analyzing data, but some tools do bring you closer to programming.

          Recommended Articles

          This is a guide to Free Data Analysis Tools. Here we discuss the basic meaning and different data analysis tools in detail. You can also go through our other suggested articles to learn more –

          Static vs. Dynamic Analysis

          Static analysis is what it sounds like: an isolated review of the source code. Dynamic analysis, on the other hand, tests code as it is executed on a virtual or even a real machine/processor.

          Think of static analysis as a brush and dynamic analysis as a fine-toothed comb: dynamic analysis can identify more subtle defects since it reviews how code interacts with other systems, sensors, or peripherals.

          The big difference is that dynamic analysis cannot find flaws in an entire codebase; it can only find issues in the code that is actually executed. A best practice is therefore to use both static and dynamic analysis methods to produce the most effective and efficient code.
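
To make the distinction concrete, here is a toy Python sketch (a hypothetical example, not taken from any particular tool): the defect below passes a purely static review of the source, since the code is syntactically valid and type-consistent, but it surfaces as soon as the code is executed:

```python
def average(values):
    # A static review sees valid, well-typed code here; nothing flags
    # the missing guard for an empty input list.
    return sum(values) / len(values)

# Dynamic analysis (actually running the code, e.g. under a test suite)
# exercises the real execution path and exposes the defect:
try:
    average([])                 # raises ZeroDivisionError at runtime
    crashed = False
except ZeroDivisionError:
    crashed = True
```

Note also the limitation described above: the run only reveals defects on the paths it exercises; a test that never calls `average([])` would miss this flaw entirely.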

          Arc Flash Evaluation calculates the incident energy and arc flash boundary for each location in a power system. Arc Flash saves time by automatically determining trip times from the protective device settings and arcing fault current values. Incident energy and arc flash boundaries are calculated following the NFPA 70E, IEEE 1584, and NESC standards.

          See how our products can help you save time, ensure compliance, and save lives. Fault analysis, coordination, and Arc Flash are just a few features in our software suite.

          In Excel, we have a few inbuilt tools which are used for data analysis, but these become active only once you enable them. To enable the Data Analysis tool in Excel, go to the File menu's Options tab. In the Excel Options window, under Add-Ins, select an analysis pack, say Analysis ToolPak, and click Go. This opens a window in which we can select one or multiple data analysis tool packs, which then appear in the Data menu tab.

          Excel functions, formula, charts, formatting creating excel dashboard & others

          If you look at Excel on your laptop or computer, you may not see the Data Analysis option by default; you need to unleash it. Usually, the data analysis tool pack is available under the Data tab.

          Under the Data Analysis option, we can see many analysis options.

          Unleash Data Analysis Tool Pack in Excel

          If your Excel is not showing this pack, follow the steps below to unleash this option.

          Step 1: Go to FILE.

          Step 2: Under File, select Options.

          Step 3: After selecting Options, select Add-Ins.

          Step 4: Once you click on Add-Ins, at the bottom, you will see Manage drop-down list. Select Excel Add-ins and click on Go.

          Step 5: Once you click on Go, you will see a new dialogue box listing all the available analysis tool packs. I have selected 3 of them and then clicked OK.

          Step 6: Now, you will see these options under the Data ribbon.

          How to Use the Data Analysis Tool in Excel?

          Let’s understand the working of a data analysis tool with some examples.

          T-test Analysis – Example #1

          A t-test returns the probability of the test. Look at the data below on two teams' scoring patterns in the tournament.

          Step 1: Select the Data Analysis option under the DATA tab.

          Step 2: Once you click on Data Analysis, you will see a new dialogue box. Scroll down and find the t-test. Under t-test, you will see three kinds of t-test; select the first one, i.e. t-Test: Paired Two Sample for Means.

          Step 3: After selecting the first t-Test, you will see the below options.

          Step 4: Under Variable 1 Range, select team 1 score and under Variable 2 Range, select team 2 score.

          Step 5: For Output Range, select the cell where you want to display the results.

          Step 6: Check Labels because we have selected the ranges including headings. Click OK to run the test.

          Step 7: Starting from cell D1, the test result will be shown.

          The result will show the mean value of the two teams, the variance, how many observations were conducted (i.e. how many values were taken into consideration), the Pearson correlation, etc.

          Looking at P(T<=t) two-tail, it is 0.314, which is higher than the standard expected p-value of 0.05. This means the data is not significant.

          We can also do the t-test by using the built-in function T.TEST.
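
The same paired two-sample test can be sketched outside Excel as well. The Python snippet below (with made-up scores, not the figures from the worksheet above) computes the paired t-statistic from the per-match differences and compares it against the two-tailed critical value, using only the standard library:

```python
import math
import statistics

# Hypothetical scores for two teams, paired by match (made-up data)
team1 = [78, 85, 62, 90, 71, 88, 95, 64]
team2 = [75, 89, 60, 84, 70, 91, 92, 66]

# Paired t-test: work on the per-match differences
diffs = [a - b for a, b in zip(team1, team2)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Two-tailed critical value at alpha = 0.05 with n - 1 = 7 degrees of
# freedom, taken from a t-table
t_crit = 2.365
significant = abs(t_stat) > t_crit   # False here: no significant difference
```

This mirrors the worksheet result above: when the statistic (equivalently, when the p-value exceeds 0.05), the difference between the two teams is not significant.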

          SOLVER Option – Example#2

          SOLVER is, as its name suggests, for solving problems. SOLVER works like Goal Seek in Excel.

          Look at the below image. I have data of product units, unit price, total cost, and the total profit.

          The quantity of units sold is 7550 at a selling price of 10 per unit. The total cost is 52500, and the total profit is 23000.

          As a proprietor, I want to earn a profit of 30000 by increasing the unit price. As of now, I don't know by how much I have to increase the unit price. SOLVER will help me solve this problem.

          Step 1: Open SOLVER under the DATA tab.

          Step 2: Set the objective cell to B7 with a target value of 30000, by changing cell B2. Since I don't have any other special criteria to test, I click the SOLVE button.

          Step 3: The Result will be as below:

          OK, Excel SOLVER solved the problem for me: to make a profit of 30000, I need to sell the products at approximately 10.93 per unit instead of 10.

          In this way, we can analyze the data.

          Things to Remember

          • We have many other analysis tests, like regression, F-test, ANOVA, correlation, and descriptive statistics.
          • We can add the data analysis tool pack as an Excel add-in.
          • The Analysis ToolPak is also available under VBA.

          Recommended Articles

          This has been a guide to Data Analysis Tool in Excel. Here we discuss how to use the Excel Data Analysis Tool along with excel examples and a downloadable excel template. You may also look at these useful articles in excel –

          NDI Import I/O for Adobe CC

          Import your media files captured and recorded from NDI sources into Adobe Creative Cloud applications, from your local drives or across your network, using standard storage systems. Once NDI Import I/O for Adobe Creative Cloud is installed, all Creative Cloud applications that use video will recognize NDI files as another media option. Simply apply the media to your timelines for editing and animation projects. Because NDI files are time-stamped during recording, complex multi-cam editing is an effortless exercise.

          • Compatible with Adobe After Effects CC, Premiere Pro CC, and more…
          • Supports full-resolution, real-time video with audio and alpha channel
          • Enables synchronized multi-cam editing
