""


How to convert a Universe to Multi-source in SAP BusinessObjects BI4

The new BI 4 offers a very powerful meta-layer capability: a single Universe can sit on top of several data sources, with the great benefit of real-time integration. At first glance, you might think that existing Universes would need to be rebuilt from scratch, but this article explains how to re-use an existing Universe to provide this highly scalable and expandable meta-layer.

The multi-source Universe

A multi-source Universe is now designed as a project with the following components:

  • Connections
  • Data Foundation
  • Business Layer

These items can be created and configured separately, and then connected to one another. Creating a new Universe is straightforward because the connections, data foundation and business layer can be built intuitively using common SQL, with no need to know the peculiarities of each native connection. Once the Universe is built, what happens behind the scenes is transparent to end users: BusinessObjects produces a query that takes pieces of information from the different sources in real time.
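To picture what such a federated query does, here is a hypothetical Python sketch (the source names and rows are invented for illustration, and this is of course not BusinessObjects code): rows are fetched from two independent sources and joined on a shared key at query time, which is essentially what the multi-source engine does behind the scenes.

```python
# Hypothetical sketch of a federated (multi-source) query: rows come
# from two separate sources and are joined in real time.

sales_dwh = [  # source 1: e.g. a data warehouse
    {"customer_id": 1, "revenue": 100},
    {"customer_id": 2, "revenue": 250},
]
crm = [  # source 2: e.g. an operational CRM database
    {"customer_id": 1, "name": "Acme"},
    {"customer_id": 2, "name": "Globex"},
]

def federated_join(left, right, key):
    """Join rows coming from two different connections on a shared key."""
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

result = federated_join(sales_dwh, crm, "customer_id")
```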

However, while the creation process is quite simple when generating a new Universe from scratch, it is not so straightforward if we are migrating from a legacy universe. Let’s see why.

UNV to UNX conversion process

In our experience, the three steps to be completed are the following:

  • Legacy Universe (UNV) import: using the standard migration process, the legacy Universe can be imported into the new BI4 platform. This can be done very quickly and has the following immediate advantages:
    • Migrated Web Intelligence reports will still sit on top of this legacy meta-layer.
    • Live Office BI4, Crystal 2011 and other client tools can continue to work, as they still use this format.

But we still cannot use platform modules like Explorer BI4 or Crystal Enterprise, nor take advantage of the new security model or the new features of the Information Design platform, so the natural next step is to enable these.

  • New Universe (UNX) conversion: from the Information Design Tool, click File > "Convert .unv universe" and a new UNX Universe is produced, with a project containing the three main items: Connection, Data Foundation and Business Layer. The advantages are the ones previously stated, but there is one big disadvantage: the automatically generated Data Foundation is mono-source, so the resulting Universe will not be scalable, and there is no easy way to turn a Data Foundation from mono- to multi-source. It therefore needs to be rebuilt, as explained in the next step.
  • New Universe (UNX) multi-source conversion:

A new Data Foundation must be created, following the steps below:

  • Define connections
  • Create new Data Foundation
  • Copy and paste items into the new Data Foundation and/or re-type tables and joins using standard SQL.

The Business Layer also needs changes, essentially to be re-pointed to the new Data Foundation. The recommended steps are:

  • Re-point the Business layer to the new Data Foundation
  • Re-type the calls from the objects to the tables using standard SQL

A limitation at this stage is that the useful "View Associated Table" feature, which showed the table lineage for a given object, has disappeared, so this can become quite manual work. Opening the Universe Design Tool in parallel with the Information Design Tool to check the lineage can help here.

Once this is done, verify and export this new universe.

As a final step, the WebI reports can now be re-pointed to the new multi-source UNX so they can be enhanced with new alternative data.

Process summary

The following diagram summarizes the process:

  • Step 1: Legacy Universe import
  • Step 2: New Universe UNX conversion
  • Step 3: New Universe UNX multi-source conversion

UNV to UNX conversion process summary


Conclusion

In the short term, it should become common practice in BI4 to keep three versions of the same Universe:

  • UNV: To preserve the legacy WebI reports and to use certain client tools like Crystal 2011 or Live Office.
  • UNX mono-source: to use certain platform tools like Explorer or Crystal Enterprise and to have higher-level functionality.
  • UNX multi-source: to use those same platform tools and higher-level functionality, plus the ability to combine several sources in one Universe.

In the mid-term, only this last multi-source version should remain.

Benefits

This Universe conversion method is time-efficient, as it reuses all existing folders and objects, and the tips above should make the Universe re-creation smoother.

The multi-source Universe gives end users superior benefits, providing real-time integration and report-design simplicity that make their lives easier. It also helps meta-layer designers, who will see their development time reduced thanks to the new design-panel functionalities and a common standard language that is easier to understand. Project managers and architects can also consider that they do not have to build a full Data Warehouse for their projects, and with all this, IT managers will see a quick ROI and lower TCO on their investments.

If you have questions about this method or about the new Information Design Tool in SAP BI4, or if you want to share your experience or tips, please feel free to leave a comment!

How to deploy SAP BusinessObjects 3.1 Web Applications with IBM Websphere

As we all know, Tomcat and IIS are the most commonly used tools to deploy web applications (e.g. InfoView, CMC, ...) in SAP BusinessObjects, and this deployment can be done automatically through the BusinessObjects server installation. However, SAP BO also allows you to perform this with other application servers. In this article I will explain how to deploy SAP BusinessObjects web applications using IBM WebSphere.

First of all, we have to agree that any web deployment other than those done with Tomcat and IIS must be done manually.

  • Supported Application

SAP BusinessObjects 3.1 is supported on IBM WebSphere 6 Express Edition or 6 ND Edition.

  • Installation
    • Make sure that IBM WebSphere has been installed successfully on the machine and that all its services are up and running.
    • During the SAP BusinessObjects server installation, when you reach the web deployment part, DO NOT SELECT any of the options to deploy Tomcat or IIS; just check the box to deploy the web applications manually later.
  • Web configuration file
    • The wdeploy configuration file is:

<BO_install_dir>\deployment\config.websphere6

    • Modify the config.websphere6 file (the values you typically need to adapt are the installation directory, instance, ports and admin credentials, shown below).

config.websphere6 file:

# as_dir: the installation directory of the application server
as_dir=C:\Program Files\IBM\WebSphere\AppServer

# as_instance: the application server instance to deploy to
as_instance=server1

# as_virtual_host: the virtual host the applications will be bound to
as_virtual_host=default_host

# as_soap_port: the SOAP administration port of the administration server.
#   If the value is not set (if the line is commented out), the default value is used.
as_soap_port=8880

# as_admin_is_secure (default: false): is security activated in WebSphere?
#   Security is activated when a user wishing to log into the admin portal has to provide
#   a username and a password. When security is NOT activated, it is not necessary to
#   provide as_admin_username and as_admin_password (the lines can be commented out)
as_admin_is_secure=false
as_admin_username=admin
#as_admin_password=%AS_ADMIN_PASSWORD%

# ws_instance: the web server instance that will serve the requests, in distributed mode
#ws_instance=webserver1 (TO BE USED IF the web server is installed in SPLIT mode)

## Don't remove next line
enforce_file_limit=true

 

  • Command used to deploy the applications

To deploy the web applications, open a command prompt (CMD) on the BO server and run the following command:

wdeploy config.websphere6 deployall

This will deploy all the BO web applications to the IBM WebSphere server. The process takes about 20 minutes, and 17 applications are installed.

  • Deploying web applications with the WebSphere administration console

Ensure that your WebSphere web application server is installed, configured and running before deploying WAR files.

  1. Log in to the WebSphere Application Server Administrative console using the following URL: http://WAS_HOSTNAME:PORT/admin (the console's default port number is 9060). Give a unique name to your web application and proceed to the next step.
  2. Under the Applications heading of the console navigation menu, click Enterprise Applications in the left navigation pane. Highlight the server you created (or server1 if you didn't create your own) under Clusters and Servers and enable its checkbox.
  3. Click the Install button and navigate to the location of the WAR file to deploy. If deploying from a remote file system, select the "Remote File System" option. Select the virtual host you created (or default_host if you didn't create your own) from the Virtual Host drop-down list.
  4. Enter a context root for the WAR file (e.g. /CmcApp for CmcApp.war) and press Next, followed by Continue.
  5. Review the summary page, and press Finish when done.
  6. Click Save to Master Configuration.
  7. Click the Save link, then the Save button.
  8. Under the Applications heading of the console navigation menu, click Enterprise Applications in the left navigation pane.
  9. Verify that the WAR file was deployed, and then click the Start button. Repeat these steps for each WAR file you need to deploy.
  • Test

To test your deployment, open a browser and enter the URL (e.g. for InfoView):

http://<BOservername>:<PortNumber>/InfoViewApp

 

If you have any questions or contributions, please leave a comment below.

Attend the Clariba Webinar "Why Migrate to SAP BusinessObjects BI 4?"

Do you wish to know more about the reasons to migrate to SAP BusinessObjects BI 4, the most advanced Business Intelligence platform?

Attend our Webinar on the 12th of April, from 11:00-12:00 CET (Presented in Spanish)

REGISTER HERE

SAP BusinessObjects BI 4 offers a complete set of functionalities that are key to today's Business Intelligence market: improved performance management, reporting, search, analysis, data exploration and integration. This new version of SAP's BI platform introduces several significant improvements to your BI environment, with a great number of functionalities designed to optimize performance.

With this in mind, Clariba invites you to invest an hour of your time to get to know the news and advantages of SAP BusinessObjects BI4, the most advanced BI platform.

The agenda of our webinar is the following:

  • Welcoming and introduction
  • What is new in SAP BusinessObjects BI 4
  • Benefits of migrating to SAP BusinessObjects BI 4
  • Why migrate with Clariba
  • Questions and answers

For more information about SAP BusinessObjects BI 4, visit our website.

Best Regards,

Lorena Laborda Business Development Manager - Clariba

 


Applying Custom Security on Web Intelligence Documents

One of the main hurdles a Web Intelligence developer has to overcome is how to deal with data security. Indeed, data security remains an overriding concern for many companies trying to ensure the availability, integrity and confidentiality of business information, protecting the database both from destructive forces and from unwanted actions or undesired data visualization by unauthorized users. In SAP BusinessObjects we have several ways to set up a security roadmap in terms of authorized data access, but this time I would like to show how to apply custom security to our WebI documents by using a simple table, joins forced in Universe Designer, and the WebI tool, in order to show only the data that a user is authorized to see.

We have the following scenario: imagine a group with different levels of hierarchy in terms of data access. The higher you are in the organization, the more data you have access to. The first level of the hierarchy can see all the data; the second level can see its own data and the levels below, but has no access to first-level information; the third level can see its own data and the levels below, but has no access to second- and first-level information, and so on.
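The access rule above can be sketched in a few lines of Python (a hypothetical illustration; the 0-to-3 level numbering matches the example used below, where 0 is the top of the organization):

```python
# Illustrative sketch of the hierarchical access rule: a user at level n
# can see data at level n and all levels below (higher numbers = lower
# in the hierarchy). Level 0 is the top, level 3 the bottom.

def visible_levels(user_level, max_level=3):
    """Return the set of levels a user placed at user_level may see."""
    return set(range(user_level, max_level + 1))
```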

 

Let's now see a step-by-step approach to achieving this.

The first thing to do is to define the hierarchy structure, specifying each individual's level and the data he or she will therefore have access to. After that, we create a table in our database where we store groups, users and privileges. The key fields for this purpose are:

BO_User: this will be checked against the current user accessing the WebI document.

Highest_Level: the level in the hierarchy the user belongs to. For this example we have 4 organization levels, where 0 is the highest and 3 the lowest.

Level_value: this will be checked against the fact table.

Once the table with all the related data is in place, it is time to map it in the SAP BusinessObjects meta-layer. For this purpose we import the affected Universe and create a derived table which retrieves all the related data for a given user (that is, all the data the user is allowed to see according to his or her access level). The SQL code should be something like this:

SEL BO_User, Level_Organization_3
FROM CLARIBA.SECURITY a
LEFT JOIN
  (SEL Level_Organization_0, Level_Organization_1, Level_Organization_2, Level_Organization_3
   FROM CLARIBA.FACT_TABLE GROUP BY 1,2,3,4) b
ON ( (Highest_Level=0 AND UPPER(a.level_value) = b.Level_Organization_0)
  OR (Highest_Level=1 AND UPPER(a.level_value) = b.Level_Organization_1)
  OR (Highest_Level=2 AND UPPER(a.level_value) = b.Level_Organization_2)
  OR (Highest_Level=3 AND UPPER(a.level_value) = b.Level_Organization_3) )
WHERE security_group='CLARIBA'

 

This derived table provides a couple of objects to be used in the WebI document we want to secure (BO_User and Level_Organization_3).

The third step is to build and apply the security in the WebI document where we want to restrict the data. For this purpose we create two new dimensions and one detail. Make sure your query includes these newly created objects.

The first task is to discover which user is accessing the WebI document. We can get the login by creating a new dimension named "BO_User" containing the following formula:

=CurrentUser()

 

Once we know who is accessing the document, we have to check whether BO_User matches the user name stored in our table. We create a dimension named "FlagBOuser" with the following formula:

=If(Lower([BO_User])=Lower([User Name]);1;0)

 

The next step is to control what level of data access this BO_User has; in other words, we apply a kind of row/column-level security. For this purpose we create a detail object named "Level_Organization" with the following code:

=If([FlagBOUser]=1;[Level_Organization_3])

 

Once we have these objects, the very last step is to drag and drop both FlagBOuser and Level_Organization as global filters at document level. This way the data restriction applies to every data block displayed in the report.

The conditions to apply are simple: "FlagBOuser" must equal 1, meaning the current user corresponds to a user in the database table, and "Level_Organization" must not be null, meaning there is data to display.
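As a hypothetical sketch of what these two document-level filters do (the row data and field names below are invented; in WebI this logic lives in the FlagBOuser and Level_Organization objects, not in Python):

```python
# Sketch of the two document-level filter conditions: keep a row only if
# the logged-in user matches the security-table user (flag = 1) and the
# organisation level is not null (there is data to display).

def flag_bo_user(current_user, user_name):
    """FlagBOuser: 1 when the logged-in user matches the security table row."""
    return 1 if current_user.lower() == user_name.lower() else 0

def filter_rows(current_user, rows):
    """Apply both global filters to every data block."""
    return [
        r for r in rows
        if flag_bo_user(current_user, r["user_name"]) == 1
        and r["level_organization"] is not None
    ]

rows = [  # invented sample data
    {"user_name": "SilviaR", "level_organization": "EMEA", "mtd": 10},
    {"user_name": "JoseY",   "level_organization": None,   "mtd": 20},
    {"user_name": "SilviaR", "level_organization": "APAC", "mtd": 30},
]
```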

At this point in the exercise, we should be able to restrict the data displayed in the WebI document according to the user accessing it.

Last but not least, we can also control particular cells, such as subtotals, by creating a flag that ensures only the employees who are allowed to see this content can do so.

=If(Lower([BO_User]) InList("SilviaR";"JoseY");1;0)

 

As we have seen in this example, custom security in WebI provides an alternative to other types of security we can apply in our BO system (such as row-level security in Universe Designer). We can achieve a nice data-security solution with simplicity, effectiveness and reduced maintenance requirements.

If you have any questions do not hesitate to leave a comment below.

Implementing Materialized Views in Oracle - Execute queries faster

Let's assume that you've been convinced by Marc's excellent article about the aggregate awareness dilemma, and that after balancing all the arguments you've decided to implement the aggregates in your Oracle database. Two parts are necessary: the materialized views and the query rewrite mechanism.

What is a materialized view?

Think of it as a standard view: it is also based on a SELECT query. But while views are purely logical structures, materialized views are physically created, like tables, and like tables you can create indexes on them. Unlike plain tables, though, materialized views can be refreshed (automatically or manually, as we'll see later) according to their definitions.

Let's imagine the following situation: a multinational company manages the financial accounts of its subsidiaries. For each period (year + month) and for each company, many thousands of records are saved in the data warehouse (with an account code and a MTD (month to date) value). You'll find below a very simplified schema of this data warehouse.

What happens when we want to have the sum of all accounts for each period?

Without a materialized view, all the rows have to be retrieved so that the sum can be calculated. In my case, the following query takes around 2 seconds on my test database, and the explain plan tells me that more than 1 million records had to be read.

(Query 1)

select p.year, p.month, sum(a.mtd)
from dim_period p
join account_balance a on a.period_key = p.period_key
group by p.year, p.month

So how do you avoid reading those million-plus records? One solution is to maintain aggregate tables in your database, but that means a bigger ETL and a more complex Universe with @aggregate_aware functions. Although this could be a valid option, we've chosen to avoid it.

Another solution is to create a materialized view. The syntax can be quite simple:

(Query MV-1)

CREATE MATERIALIZED VIEW MV_PERIODS
BUILD IMMEDIATE
ENABLE QUERY REWRITE
AS
select p.year, p.month, sum(a.mtd)
from dim_period p
join account_balance a on a.period_key = p.period_key
group by p.year, p.month

Let's go through the query lines.

  • CREATE MATERIALIZED VIEW MV_PERIODS => we simply create the view and name it MV_PERIODS.
  • BUILD IMMEDIATE => the materialized view is built immediately.
  • ENABLE QUERY REWRITE => without this, the materialized view would be created and could be accessed directly, but it wouldn't be automatically used by the query rewriting mechanism.
  • The "as select…" part is the same as the original query we made.

You'll notice when executing this query that the time needed to create the materialized view is at least the time needed to execute the underlying query (plus some time to physically write the rows to the database). In my case it was 2.5 seconds, slightly more than the original 2 seconds.

If I now re-execute my original query, I get the same result set as before, but instead of 2 seconds it takes 16 milliseconds, roughly 125 times faster. Oracle understood it could automatically retrieve the results from the materialized view, so it only read this small table instead of doing a full read of the fact table.
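The speed-up is easy to picture with a hypothetical sketch (invented data, not Oracle code): the fact rows are aggregated once, and each subsequent query reads the small pre-computed result instead of scanning every row.

```python
from collections import defaultdict

# Sketch of why the materialized view is faster: the fact rows are
# aggregated once (analogous to CREATE MATERIALIZED VIEW ... BUILD
# IMMEDIATE), and queries then read the small aggregate.

fact_rows = [  # invented sample fact rows
    {"year": 2012, "month": 1, "mtd": 100},
    {"year": 2012, "month": 1, "mtd": 50},
    {"year": 2012, "month": 2, "mtd": 70},
]

def build_mv(rows):
    """One-off aggregation over the fact rows, grouped by (year, month)."""
    mv = defaultdict(float)
    for r in rows:
        mv[(r["year"], r["month"])] += r["mtd"]
    return dict(mv)

mv = build_mv(fact_rows)   # built once, like the materialized view
total_jan = mv[(2012, 1)]  # answered without touching fact_rows again
```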

 

The data freshness

Now imagine a new month has gone by, and new rows have arrived in your data warehouse. You re-execute your original select query and, to your great surprise, it takes a long time: 2 seconds! But why?

It is possible to ask Oracle whether a query was rewritten with a given materialized view and, if not, to give us the reasons. Let's see a possible syntax below.

SET SERVEROUTPUT ON;
DECLARE
  Rewrite_Array SYS.RewriteArrayType := SYS.RewriteArrayType();
  querytxt VARCHAR2(4000) := '
    select p.year, p.month, sum(a.mtd)
    from dim_period p, account_balance a
    where a.period_key = p.period_key
    group by p.year, p.month
  ';
  no_of_msgs NUMBER;
  i NUMBER;
BEGIN
  dbms_mview.Explain_Rewrite(querytxt, 'MV_PERIODS', Rewrite_Array);
  no_of_msgs := rewrite_array.count;
  FOR i IN 1..no_of_msgs
  LOOP
    DBMS_OUTPUT.PUT_LINE('>> MV_NAME  : ' || Rewrite_Array(i).mv_name);
    DBMS_OUTPUT.PUT_LINE('>> MESSAGE  : ' || Rewrite_Array(i).message);
  END LOOP;
END;

(Only the query text and the materialized view name need updating; the rest should stay as is.)

Once I executed these lines, I got the following result:

>> MV_NAME  : MV_PERIODS
>> MESSAGE  : QSM-01150: query did not rewrite
>> MV_NAME  : MV_PERIODS
>> MESSAGE  : QSM-01029: materialized view, MV_PERIODS, is stale in ENFORCED integrity mode

(Technical note: to see these lines in Oracle SQL Developer, you need to activate the DBMS output: menu View / DBMS Output, then click the 'Enable DBMS Output for the connection' button.)

The line "materialized view, MV_PERIODS, is stale in ENFORCED integrity mode" means that the materialized view is not used because it does not have the right data anymore. So to be able to use the query rewrite process once again, we need to refresh the view with the following syntax:

BEGIN DBMS_SNAPSHOT.REFRESH('MV_PERIODS','C'); end;

Note that in certain situations end users may prefer having yesterday's data in 1 second rather than today's data in 5 minutes. In that case, choose the STALE_TOLERATED integrity mode (rather than the ENFORCED default) and the query will be rewritten even if the data in the materialized view is no longer fresh.
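The choice between the two integrity modes can be sketched as a simple decision (a hypothetical illustration; Oracle's real rewrite logic of course considers far more than this):

```python
# Sketch of the rewrite decision under the two integrity modes discussed
# above: ENFORCED refuses a stale MV, STALE_TOLERATED accepts it.

def can_rewrite(mv_fresh, integrity_mode):
    """Return True if the query may be rewritten against the MV."""
    if integrity_mode == "ENFORCED":
        return mv_fresh
    if integrity_mode == "STALE_TOLERATED":
        return True
    raise ValueError("unknown integrity mode: " + integrity_mode)
```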

 

Extend your materialized views

Now let's imagine that we want to have not only the account sums by periods, but also by company code. Our new SQL query is the following:

(Query 2)

select p.year, p.month, c.company_code, sum(a.mtd)
from dim_period p, account_balance a, dim_company c
where a.period_key = p.period_key
and a.company_key = c.company_key
group by p.year, p.month, c.company_code

Of course the materialized view MV_PERIODS doesn't have the necessary information (company key or company code) and cannot be used to rewrite this query. So let's create another materialized view.

(Query MV-3)

CREATE MATERIALIZED VIEW MV_PERIODS_COMPANIES
BUILD IMMEDIATE
ENABLE QUERY REWRITE
AS
select p.year, p.month, c.company_code, sum(a.mtd)
from dim_period p, account_balance a, dim_company c
where a.period_key = p.period_key
and a.company_key = c.company_key
group by p.year, p.month, c.company_code

So now our query completes in a very short time. But what if, after deleting the MV_PERIODS materialized view, you try to execute the first query (the one without the companies)? The query rewrite mechanism will work as well! Oracle will understand that it can use the content of MV_PERIODS_COMPANIES to calculate the sums more quickly.

Be aware that the query will only be rewritten if you created a foreign key relationship between ACCOUNT_BALANCE.COMPANY_KEY and DIM_COMPANY.COMPANY_KEY. Otherwise you'll get the following message:

QSM-01284: materialized view MV_PERIODS_COMPANIES has an anchor table DIM_COMPANY not found in query.

 

Is basing the materialized view on the keys an option?

The materialized views we've created are very useful but still a bit static. You may ask yourself: wouldn't it have been a better idea to base the materialized view on the keys? For example, with the following syntax:

(Query MV-4)

CREATE MATERIALIZED VIEW MV_PERIODS_COMPANIES_keys
BUILD IMMEDIATE
ENABLE QUERY REWRITE
AS
select period_key, company_key, sum(mtd)
from account_balance
group by period_key, company_key

The answer is "it depends". On the good side, this allows for a greater flexibility, as you're not limited to some fields only (as in the query MV-1 where you're limited to year and month). On the bad side, as you're not using any join, the joins will have to be made during the run-time, which has an impact on the performance query (but even then, the query time will be much better than without materialized views).

So if you want a flexible solution because you don't yet know which fields the users will need, it's probably better to use the keys. But if you already know the precise queries that will come (for example, for pre-defined reports), it may be worth using the needed fields in the definition of the materialized view rather than the keys.
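The run-time join implied by a key-based materialized view can be pictured with a hypothetical Python sketch (the keys and values below are invented): the aggregate stays generic, but producing readable fields means joining back to the dimensions at query time.

```python
# Sketch of the trade-off: an aggregate keyed on surrogate keys stays
# flexible, but needs a run-time join to the dimensions to produce
# readable (year, month, company_code) results.

mv_on_keys = {(101, 7): 150.0}   # (period_key, company_key) -> sum(mtd)
dim_period = {101: (2012, 1)}    # period_key -> (year, month)
dim_company = {7: "ACME"}        # company_key -> company_code

def query_by_fields(mv, periods, companies):
    """Join the key-based aggregate back to the dimensions at run time."""
    return {
        (*periods[pk], companies[ck]): total
        for (pk, ck), total in mv.items()
    }

result = query_by_fields(mv_on_keys, dim_period, dim_company)
```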

If you have any doubts or further information on this topic, please leave a comment below.

Attend the Clariba Webinar "Why Migrate to SAP BusinessObjects BI 4?"

Do you wish to know more about the reasons to migrate to SAP BusinessObjects BI 4, the most advanced Business Intelligence platform?

Attend our Webinar on the 13th of March, from 11:00-12:00 CET (Presented in Spanish)

REGISTER HERE

SAP BusinessObjects BI 4 offers a complete set of functionalities that are key to today's Business Intelligence market: improved performance management, reporting, search, analysis, data exploration and integration. This new version of SAP's BI platform introduces several significant improvements to your BI environment.

With this in mind, Clariba invites you to invest an hour of your time to get to know the news and advantages of SAP BusinessObjects BI4, the most advanced BI solution, which will provide your company with a great number of functionalities designed to optimize performance and bring you a scalable and secure platform.

The agenda of our webinar is the following:

  • Welcoming and introduction
  • What is new in SAP BusinessObjects BI 4
  • Benefits of migrating to SAP BusinessObjects BI 4
  • Why migrate with Clariba
  • Questions and answers

For more information about SAP BusinessObjects BI 4, visit our website www.clariba.com

Best Regards,

Lorena Laborda Business Development Manager - Clariba

 


Attach a Dashboard Screenshot to an Email with one “click”

It is impressive how far we can go in a project when we try to meet all our customers' requirements, including those that seem complicated to solve. During one of our projects in the Middle East we received one such request: our customer asked us to build a functionality to send screenshots of their dashboard by email. Fair enough.

We immediately thought of installing a free PDF creator tool and telling them to print to PDF and then attach the document to the email, but that was too many steps for our customer. We needed to achieve this functionality with a single "click".

Within a couple of hours, and after some emails to my colleagues Pierre-Emmanuel Larrouturou and Lluis Aspachs, we were working on a solution based on open-source software and free tools that we found on Google.

Below are the steps we followed to achieve the goal:

We created the exe file that takes the snapshot and attaches it to an email:

  • It looks for the C:\Temp or D:\Temp folder to save the image
  • It looks for Outlook (Office 2003, 2007 or 2010) on both the C:\ and D:\ drives
  • We added Xcelsius_burst.bat to skip the Windows prompts that authorize launching the exe
  • We saved the two files on the C:\ drive, but they can also be placed on D:\; if the user creates a dedicated folder, only the .bat file needs to be edited
  • We added the bat file path to a URL button in Xcelsius and ran it

Notes: please check your browser options to avoid the .bat popups if they are a problem. This version only works if installed on each customer machine. If you want to install it on a server (to avoid multiple installations), you can create a more complex solution using PsTools, available for free on the web, and adding it to your web server (in our case, Tomcat).

 

You can download the files by clicking on the link below. This solution is quite simple but it made our customer quite happy.

Dashboard Burst

 

Just to add more value to this article, there is another way to crack this issue: we are also adding below the latest version of the feature, Dashboard_by_email.exe, which allows any screenshot (not only from dashboards) to be automatically attached to an email. The program needs to run at Windows startup, and the user can get the screenshot attached to an email by pressing CTRL+ALT+D. Click the link below to download.

Dashboard by email

 

We are also aware that the market now offers add-ons for Dashboard Design which can also meet this and other requirements. You can check out what our friends at Data Savvy Tools (http://datasavvytools.com/) created for dashboard printing. We have tested their component that allows selecting the dashboard components to be printed out (and it's great).

Let us know your comments and we will be more than happy to discuss these solutions with you.


SAP Universe Designer Tricks and Tips: Table Mapping

You know everything there possibly is to know about SAP Universe Designer, right? Well, I bet there's still a trick or two left to discover. For example, there is one seldom-used function in Universe Designer called Table Mapping. This option was originally included in Universe Designer to prevent certain data from being seen by developers (the developers' user group sees data from a different table than the business users).

In this article we are going to show how to implement this table mapping feature for its intended use, and we will then apply it in a couple of real-life scenarios to provide developers with a simple and effective solution that minimizes maintenance on their systems.

In order to create a replacement rule, follow the steps below:

1. Go to Tools – Manage Security – Manage Access Restrictions.

Picture1

2. Click New to create the new restriction

Picture2

3. Go to Table Mapping and click Add to create the new rule

Picture3

4. Fill in the tables you want to replace. In this case, we want the developers to see the data from the table SALES_FACT of the schema DEV_DB instead of PROD_DB (where we are storing the production data).

Picture4

5. Click OK, fill in a name for the rule (in this case, Developers Sales) and click OK.

Picture5

6. To apply this rule only to the user group “Developers”, click on “Add user and group”

Picture6

7. Select the group IT, and click OK.

Picture7

8. Apply the rule to the IT group

Picture8

Once we have published the universe, the SQL code of all reports will switch automatically between the tables DEV_DB.SALES_FACT and PROD_DB.SALES_FACT, depending on the user that is logged into the system.
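To make the effect concrete, here is a sketch of the SQL swap the platform performs. The column names (SALE_DATE, REVENUE) are invented for illustration; only the table names come from the example above.

```sql
-- SQL generated for a business user (no restriction applies):
SELECT SALES_FACT.SALE_DATE, SUM(SALES_FACT.REVENUE)
FROM PROD_DB.SALES_FACT SALES_FACT
GROUP BY SALES_FACT.SALE_DATE;

-- The same report refreshed by a member of the restricted group:
-- the table mapping rule rewrites the FROM clause automatically.
SELECT SALES_FACT.SALE_DATE, SUM(SALES_FACT.REVENUE)
FROM DEV_DB.SALES_FACT SALES_FACT
GROUP BY SALES_FACT.SALE_DATE;
```

Note that the objects, filters and report definitions are untouched; only the schema qualifier of the mapped table changes at query-generation time.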

One important point to take into consideration is the priority of the rules: in case of conflict, the restriction with the highest priority (that is, the lowest priority number) will apply.

The example we reviewed above (dynamic change between developers and business user tables) is the most typical use for this functionality. However, there are some other scenarios where the table replacement could be very useful:

Scenario 1: We are reloading data into a production table. Beforehand, we have created a copy of the old data in a temporary table that we want to use for reporting until the reload finishes.

Solution: We can add a new rule to replace the original table with the temporary one, and apply it to the group “Everyone”. As soon as the reload is completed, we can delete the rule. This process is much faster than renaming the table in the universe and changing all the objects in the universe that use this table.

Scenario 2: We have different fact tables for different departments with the same or similar table structure, and all the dimension tables are common. We are looking for the solution that best reduces future maintenance.

Solution: Instead of creating different copies of the same universe just to change the fact table, we can create one universe and use the table replacement functionality to dynamically switch the fact table depending on the functional group (in this case, the department) that the user belongs to.

As we have seen in these examples, this table mapping feature provides developers with a simple and effective way to reduce maintenance on their systems.

If you have any questions do not hesitate to leave a comment below.

Xcelsius in BI on Demand (BIOD)

In this blog article I am going to talk about Xcelsius in SAP BI On Demand (BIOD), explaining the steps that you should follow to upload an existing Xcelsius dashboard to the BIOD system.

What is BIOD?

First of all, for those who don’t know what BIOD is, I will give a brief explanation. Basically, BIOD is the most complete and approachable cloud-based business intelligence suite available on the market today. BIOD is software as a service: you do not need to install any software on your machines to get instant value from the system. All you need to do is log in and provide some data. It is an inexpensive BI solution, since you don’t need to make a huge investment in hardware, licenses, etc.; everything is in the cloud. The target for this technology is small companies, which are less likely to be able to acquire a BI system due to the costs; with BIOD they have an accessible way into the SAP BusinessObjects analysis system. In BIOD you are able to create:

  • Xcelsius dashboards
  • Web Intelligence reports
  • Explorer views

You can get more information about BI On Demand here.

Now, let's see how to upload an existing Xcelsius dashboard to the BIOD system.

How to upload an Xcelsius Dashboard to BIOD?

First of all, if you don’t have a BIOD account, you should create one. It’s free, and with it you will be able to test most of the features of this cloud system. Click here to sign up.

Once we are logged in, we will see this screen.

Now I want to show you how to upload an existing Xcelsius file with static data to the BIOD system.

First of all we should create the data source, so in the My Stuff panel we select Datasets. After that, we click the Add New button -> Add Dataset.

Then we should choose where to take the dataset from. We have several options: create it from a query (this option is only available in the BIOD Advanced version, where the connection to a universe is possible), bring data from Salesforce, create an empty dataset from scratch, or, finally, upload a file (xls, xlsx or csv), which is what we will use in this example.

As I said before, we select an Excel file as the source of our dataset; it is important that the first row of the file contains the labels of each column. We can also edit this dataset, changing the value type of each column, the label name, etc.
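As a sketch, an uploaded flat file would look like the fragment below, with the column labels on the first row as required (the column names and values are invented for illustration):

```csv
Region,Product,Revenue
EMEA,Widgets,1200
APAC,Widgets,800
AMER,Gadgets,950
```

BIOD will infer a label and a value type for each column from this header row, both of which can be adjusted afterwards in the dataset editor.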

At the bottom of this page we can find the Properties section, where we should enable the web service. Once we have done this, the system will generate a URL that will act as the reference to the dataset in our dashboard.

The next step will be to upload the Xcelsius file as a template, so we select Add New -> Add Template.

We enter a name for this template, uncheck the Create a new Xcelsius file checkbox and, finally, select the .xlf file that we have locally.

The screen below will then appear. In order to connect the dataset to our .xlf file, we should click the blue link (“you may click here to edit the Xcelsius file”). You can also attach an image of the dashboard as a thumbnail for the repository, which makes object selection easier later on.

Once the Xcelsius editor is open, we add a connection using the OnDemand -> Add Connection menu option. This will create one Flash Variables connection (FlashVar) and two web service connections (OD_Data1 and OD_Data2). In our case we should delete one data connection because we only have one source, but if we need more data sources we can create as many as we want. It will also create a new tab in the Xcelsius spreadsheet that contains these cell bindings.

After that, we configure the data connections. Open the Data Manager (Data -> Connections) and you will see a connection of type FlashVars with the following variable:

  • get_data_url (mandatory): This should be bound to the same cell that the Web Service URL of the web service connection is bound to. If you have multiple connections, it should be bound to the range which holds those connections.

Each web service connection (OD_DataN; in our case only OD_Data1) then points to the set of cells to which that connection outputs its data.

These are the next steps that you should follow in order to set up the dashboard:

  • Click on My Datasets and click Copy beside the dataset OD_Data1.
  • Paste the URL from the dataset into the WSDL URL input box of the web service connection.
  • Click Import to import the schema.
  • Bind the Web Service URL to the same cell as get_data_url (note: if you used the Add Connection process, this should already be done).
  • Bind the headers and the row values.
  • Set Refresh on Load to true.

After these steps you can save your changes and then click the Back button to return to the Edit Connection step of creating a template. You should see your connection listed on the screen.

Click Next to go to the Edit Sample Data step, where you can choose to add in your sample data from the XLF if you like, and then click Finish.

Finally, we will create a visualization using this template. We select our Data Input, in this case the Data Source.

 

If we go to the visualization menu we can find the object.

 

In conclusion, we can say that the BIOD system is a nice tool for starting to test the power of the SAP solutions without a potentially heavy initial investment. It can also be a good tool for making demos and showing our dashboards to customers. The Explorer tool is particularly interesting to test: you can see the range of data analysis options that BIOD brings you. If you are interested in the advanced solution, you should get in touch with SAP.

If you have any comments or doubts, do not hesitate to contact us or leave a comment below.

Tackling the Aggregate Awareness Dilemma

In my last project I faced a situation where the customer asked me for the best option on a particular topic, and this time my answer had to be "it depends". As a consultant, my duty was to present two different options (with their corresponding pros and cons), but I could not make the decision myself, since the answer was highly dependent on the customer's mix of IT service providers in different areas and on their road map.

In general BI terms, we could define aggregation as the process of summarizing information at a certain level of detail in order to improve performance. The Kimball Group defines Aggregate Navigation as the ability to use the right aggregated information, and recommends designing an architecture with services that hide this complexity from the end user. In the BusinessObjects world the same concept is called Aggregate Awareness, and the database administrator community usually refers to it as query re-write.

In SAP BusinessObjects, this can be achieved through the @Aggregate_Aware function, contexts and incompatible objects in Universe Designer. At the database level (RDBMS), certain vendors provide this feature through materialized views with a query re-write option (Oracle, Sybase, DB2, Informix, PostgreSQL and some others).
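As an illustration of the two approaches (all table and column names here are invented), a universe measure built with @Aggregate_Aware lists its candidate expressions from most to least aggregated, while the database-side equivalent in Oracle is a materialized view with query rewrite enabled:

```sql
-- Universe object definition: Designer picks the first expression
-- whose tables are compatible with the rest of the query.
@Aggregate_Aware(
  sum(AGG_SALES_YEAR.REVENUE),   -- most aggregated
  sum(AGG_SALES_MONTH.REVENUE),
  sum(SALES_FACT.REVENUE)        -- detail-level fallback
)

-- Database-side equivalent (Oracle): the optimizer re-writes
-- matching queries against SALES_FACT to hit the summary table.
CREATE MATERIALIZED VIEW AGG_SALES_MONTH
  ENABLE QUERY REWRITE
AS
SELECT SALE_MONTH, SUM(REVENUE) AS REVENUE
FROM SALES_FACT
GROUP BY SALE_MONTH;
```

In the first case the rewrite decision is visible (and maintained) in the universe; in the second it is entirely transparent to every tool querying the database, which is exactly the trade-off discussed below.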

So here we have the dilemma: where should this logic be placed in a customer environment, the physical layer or the logical layer?

Both options are valid, but there are some considerations that need to be taken into account from different points of view:

Table Comparison

The table above largely speaks for itself, but as a summary, my recommendation would be:

Implementing Aggregate Awareness in SAP BusinessObjects:

  • Ideal for an architecture with many database sources (not all of them support the query re-write feature, and it would need to be maintained in each of them)
  • Good to have if the database vendor may change in the future (no changes needed in the universe)
  • Suitable when there is no access to strong database administrators who can properly tune the database
  • Fits a closed reporting architecture relying on a strong semantic layer in BusinessObjects
  • Appropriate when there is a need for a centralized metadata repository

Implementing query re-write mechanisms in the RDBMS:

  • Ideal for an architecture with many reporting tools accessing the same database
  • Suitable when strong database administrators are available
  • Simplifies universe design
  • Appropriate when there is no need for a centralized metadata repository

If after reading this post you still have doubts about which direction to take at your company or customer, do not hesitate to contact Clariba at info@clariba.com or leave a comment below.