Tuesday, May 27, 2008

Planning Apps not visible in Workspace

When we launched Workspace and went to Navigate -> Applications, no Planning applications were visible. But in the EAS console, we could see all the applications under Essbase, and they were running.

This can happen if you have restarted the Essbase server.

Make sure to restart the Planning application whenever you restart Essbase for any reason. If you restart Essbase but not Planning, you may see some weird behavior.

Sunday, May 25, 2008

Start/Stop Essbase server on Linux

Source the Essbase environment variables.

HYPERION_HOME=/essbase/Hyperion;export HYPERION_HOME
ARBORPATH=/essbase/Hyperion/AnalyticServices;export ARBORPATH
ESSLANG=English_UnitedStates.Latin1@Binary;export ESSLANG
#LD_LIBRARY_PATH=$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386/server:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386/server:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386:$HYPERION_HOME/common/ODBC/Merant/5.2/lib:$ARBORPATH/bin:$LD_LIBRARY_PATH
#Use following if your deployment does not require CDF,CDM,Data mining,Single Sign-on or Triggers
LD_LIBRARY_PATH=$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386/server:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386:$HYPERION_HOME/common/ODBC/Merant/5.2/lib:$ARBORPATH/bin:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH
PATH=$ARBORPATH/bin:$PATH;export PATH
ODBCINI=$HYPERION_HOME/common/ODBC/Merant/5.2/odbc.ini;export ODBCINI

To start the Essbase server

$ ./startEssbase.sh -b &

To stop the Essbase server

$ ./essmsh -l asadmin
MAXL> alter system shutdown;
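
As a convenience, here is a minimal start-up wrapper sketch. It assumes the variable block above is saved as /essbase/scripts/essbase_env.sh (a hypothetical path) and that startEssbase.sh lives in $ARBORPATH/bin; adjust both for your install.

#!/bin/sh
# Source the Essbase environment variables, then start the agent in background mode.
. /essbase/scripts/essbase_env.sh
cd "$ARBORPATH/bin"
./startEssbase.sh -b &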

Friday, May 23, 2008

PLP_GLRevnGroupAccount_A_Stage_Full Informatica ETL Mapping Slow

On one project, we saw that the out-of-the-box mapping PLP_GLRevnGroupAccount_A_Stage_Full in Informatica was taking a really long time. In a full ETL, this mapping took almost 30 hours.

After reviewing the session log, we found that most of the time was spent in the Reader thread. Since the issue was with the source SQL, we ran the same SQL from SQL*Plus, and it finished in only 1 hour.

After reviewing the Informatica configuration, we found that the Informatica server was forced to use some pre-SQL to change the database session settings.

alter session set optimizer_index_cost_adj=1;
alter session set optimizer_index_caching=10;

These session pre-SQL statements change the optimizer's explain plan, and the optimizer was not performing well with this source query, though it performed well for other mappings.

We advised removing these pre-SQL statements at the global level and using them at the mapping level only for mappings that perform well with them.

Sunday, May 18, 2008

DAC and serverSetupPrompt.sh

There is no DAC Server installer for the Unix platform. You need to set up the Analytics client on a Windows machine first and then move the DAC folder to the Unix platform. After the DAC folder is moved, if you try to run any of the .sh scripts, you may see some weird behavior. For example, when you try to run serverSetupPrompt.sh, you may see the following error.

Exception in thread "main" java.lang.NoClassDefFoundError: com/siebel/etl/gui/login/TextModeServerSetup

If you try to run other .sh files, you may notice that the environment variables are not set up correctly.

When you move the DAC folder to Unix, make sure to run the dos2unix utility on all the .sh files to convert the DOS CR+LF line endings to Unix LF. If you do not run dos2unix, none of the .sh files will work on the Unix platform as expected. You can download the dos2unix utility from http://www.download3k.com/Install-Dos2Unix.html
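
A sketch of the conversion, assuming the DAC folder was copied to /opt/DAC (a placeholder path) and that your dos2unix build converts files in place, which most do:

$ cd /opt/DAC
$ for f in *.sh; do dos2unix "$f"; done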

Qserver is already bound to the port

Sometimes when you start the DAC Server on Unix, you will notice this error in the DAC Server log.

Qserver is already bound to the port 3141.

We observed that when the DAC Server was shut down on Linux using the stopserver.sh script, the QServer Java process did not go down and was still running.

When you shut down the DAC Server, make sure that the QServer Java process also goes down. If it does not, you have to kill the Java process before starting the DAC Server again.
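
A sketch of the check and cleanup; this assumes the lingering process's command line contains the string QServer, so verify the ps output before killing anything.

$ ps -ef | grep -i qserver | grep -v grep
$ kill -9 <pid from the ps output>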

MESSAGE:::Cannot run program "pmcmd": CreateProcess error=2, The system cannot find the file specified

When you register the Informatica servers in DAC and click the "Test" button, the test may fail with the error below.

SEVERE Thu Jan 11 18:52:41 EDT 2008

MESSAGE:::Cannot run program "pmcmd": CreateProcess error=2, The system cannot find the file specified

To troubleshoot this error, try to run the pmcmd and pmrep commands from a command window and see if they run. If not, the PATH environment variable is not set correctly.
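
A quick sketch of the fix on Windows; the PowerCenter install path below is hypothetical, so substitute your actual server\bin location. Running pmcmd with no arguments should print its usage text once the PATH is correct.

C:\> set PATH=%PATH%;C:\Informatica\PowerCenter\server\bin
C:\> pmcmd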

ORA-01652: unable to extend temp segment by 32 in tablespace TEMP while running ETL

You may receive the error ORA-01652: unable to extend temp segment by 32 in tablespace TEMP while running the ETL, especially when the tables are huge. If a SELECT statement runs against tables with 70-80 million rows and the optimizer does a hash join, it uses TEMP space and the hash-area size to build the hash. If the TEMP tablespace is not big enough, you may receive this error. There is no one-size-fits-all value, but in our case, 100 GB of space for the TEMP tablespace resolved the error. You may have to go higher if your tables have more than 100 million rows.
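
If you need to grow TEMP, one way is to add a tempfile; the file path and size below are placeholders for your environment.

ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/orcl/temp02.dbf' SIZE 30G AUTOEXTEND ON;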

Do not change the explain plan to force a NESTED LOOPS join. The optimizer initially takes more space and time to build the hash, but once the hash is built, the query returns results much faster and performs better than a nested loop join.

Saturday, May 17, 2008

Client Commands are Currently Not Being Accepted

If you receive the error "Client Commands are Currently Not Being Accepted" while performing some operation on an Essbase database, the database is most likely being restructured. For example, if a user has started a calc script that forces a database restructure, you cannot run any commands against the database until the restructure completes.

You can replicate this error with the following sequence of commands.

Force the Essbase Database Restructure

MAXL> alter database Sample.Basic force restructure;

Then immediately run some command against the database, like ESSCMD GETDBINFO or GETDBSTATS, and you will see the following error.

localhost:Sample:basic:asadmin[1]->getdbinfo
GetDbInfo:


[Sat May 17 12:37:31 2008]localhost/Sample/basic/asadmin/Error(1013204)
Client Commands are Currently Not Being Accepted

Command Failed. Return Status = 1013204

Hyperion Essbase Database Validation

How do you check whether an Essbase database is really corrupted?

If the Essbase database is really corrupted, you will not be able to start it; a corrupted database will not load. If the database loads with no problem, you can force a restructure using the following MaxL command.

alter database Sample.Basic force restructure;

If the restructure completes successfully, it indicates that the database is not corrupted.

NQCMD and Cache

On one of my projects, the customer did not have the OBIEE Scheduler set up, so iBots were not an option for seeding the cache. The customer still wanted to seed the cache and clear it after each ETL run finished.

NQCMD is one option to achieve this. We used the NQCMD utility to clear the cache and then seed it by running the logical queries behind the customer's reports. For example:

working_dir='/oraclebi/product/OracleBI'
cd $working_dir
cd setup
. ./user.sh
. ./common.sh
. ./sa-cli.sh
cd ..
nqcmd -d AnalyticsWeb -u Administrator -p SADMIN -s cmd.txt -o out.txt

In cmd.txt, use the following.

{call SAPurgeAllCache()};
SELECT Time."Fiscal Quarter" saw_0, "Facts - AR Payments Due Aging"."AR Due1-30 Amount" saw_1, "Facts - AR Payments Due Aging"."AR Due 31-60 Amount" saw_2, "Facts - AR Payments Due Aging"."AR Due 61-90 Amount" saw_3, "Facts - AR Payments Due Aging"."AR Due 90+ Amount" saw_4, "Facts - AR Payments Due Aging"."Total AR Due Amount" saw_5, "Facts - AR Payments Due Aging"."AR Due 1-30 Amount to Total Due %" saw_6, "Facts - AR Payments Due Aging"."AR Due 31-60 Amount to Total Due %" saw_7, "Facts - AR Payments Due Aging"."AR Due 61-90 Amount to Total Due %" saw_8, "Facts - AR Payments Due Aging"."AR Due 90+ Amount to Total Due %" saw_9, "Facts - AR Payments Due Aging"."AR Due Amount to Total %" saw_10 FROM "Financials - AR Overview" WHERE ((Time."Fiscal Quarter" <= VALUEOF(CURRENT_FSCL_QUARTER)) AND (Time."Fiscal Quarter" > VALUEOF(FSCL_QTR_YEAR_AGO))) ORDER BY saw_0;
SELECT Time."Fiscal Year" FROM "Financials - AR Overview" WHERE Time."Fiscal Year" BETWEEN valueof(FILTER_FSCL_FROM_YEAR) AND valueof(FILTER_FSCL_TO_YEAR) ORDER BY 1 DESC;
SELECT Time."Fiscal Quarter" FROM "Financials - AR Overview" WHERE Time."Fiscal Year" BETWEEN valueof(FILTER_FSCL_FROM_YEAR) AND valueof(FILTER_FSCL_TO_YEAR) ORDER BY 1 DESC;

The cmd.txt first clears the cache and then seeds it by running the logical queries for each report the customer has.

If you have BI on a Unix platform, you can set this up as a cron job.
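
For example, a crontab entry like the one below runs the refresh nightly at 5 AM; the script path is hypothetical and is assumed to wrap the commands shown above.

0 5 * * * /oraclebi/scripts/refresh_cache.sh > /oraclebi/scripts/refresh_cache.log 2>&1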

Wednesday, May 14, 2008

SIL_GLRevenueFact_GL_Info_Update and SIL_ARTransactionFact_Gl_Info_Update

I have seen performance issues with the SIL_GLRevenueFact_GL_Info_Update and SIL_ARTransactionFact_Gl_Info_Update mappings, in both full and incremental runs. The throughput is really low, around 90-100 rows per second. We noticed this throughput when the target table W_GL_REVN_F had 20-30 million rows.

With 20-30 million rows in the target table, these mappings take 48-72 hours for the full run.

The only workaround we found was to update the target table directly in the database and set the DAC task to Completed. You can use the MERGE statement in Oracle, which is more optimized than an UPDATE and INSERT combination, to update the target table. For example:

MERGE INTO W_GL_REVN_F A USING W_GL_REVN_F_TMP B
ON (A.INTEGRATION_ID = B.INTEGRATION_ID AND A.DATASOURCE_NUM_ID = B.DATASOURCE_NUM_ID)
WHEN MATCHED THEN
UPDATE SET
A.ACCT_DOC_NUM = B.ACCT_DOC_NUM,
A.ACCT_DOC_ITEM = B.ACCT_DOC_ITEM,
A.ACCT_DOC_SUB_ITEM = B.ACCT_DOC_SUB_ITEM,
A.DOC_HEADER_TEXT = B.DOC_HEADER_TEXT,
A.LINE_ITEM_TEXT = B.LINE_ITEM_TEXT,
A.DOC_STATUS_WID = B.DOC_STATUS_WID,
A.POSTED_ON_DT_WID = B.POSTED_ON_DT_WID,
A.POSTED_ON_TM_WID = B.POSTED_ON_TM_WID,
A.GL_RECONCILED_ON_DT = B.GL_RECONCILED_ON_DT,
A.GL_RECONCILED_ON_PROC_WID = B.GL_RECONCILED_ON_PROC_WID,
A.DELETE_FLG = B.DELETE_FLG,
A.W_UPDATE_DT = B.W_UPDATE_DT,
A.ETL_PROC_WID = B.ETL_PROC_WID;


Here W_GL_REVN_F_TMP is a temp table that was set up to hold the output of the original SIL_GLRevenueFact_GL_Info_Update mapping; it was populated from that mapping's source SQL.

The same solution was used for SIL_ARTransactionFact_Gl_Info_Update.

We used this workaround only for the full ETL. Incremental runs are also slow for these two mappings, but since there are not that many inserts/updates, they finish within our expected maintenance window.

Cannot transition the state of database [XXX_XXX] between read-write and read-only modes while other users are connected to the database

We have a backup script that fires every night and uses ESSCMD commands to take an Essbase file-system backup. This script does the following (a minimal ESSCMD sketch follows the list):

- Puts all the databases into read-only mode using the BEGINARCHIVE command
- Takes the file-system backup
- Puts all the databases back into read-write mode using the ENDARCHIVE command
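
Here is a minimal sketch of the archive steps for a single database. The host, credentials, application/database names, and archive file are all placeholders, and ESSCMD script syntax may vary slightly by release.

begin.scr (run with "esscmd begin.scr"):

LOGIN "localhost" "asadmin" "password";
BEGINARCHIVE "Sample" "Basic" "archive.lst";
EXIT;

Take the file-system backup (for example, tar the database directory under $ARBORPATH/app), then run end.scr:

LOGIN "localhost" "asadmin" "password";
ENDARCHIVE "Sample" "Basic";
EXIT;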

We have sometimes noticed the error below.

[Mon Dec 11 18:18:00 2006]Local/ESSBASE0///Error(1013239)
Cannot transition the state of database [XXX_XXX] between read-write and read-only modes while other users are connected to the database.

This error was caused by a calculation script that an end user was running at the time of the backup. Because of that calculation script, BEGINARCHIVE was not able to put the database into read-only mode.

I think that using the LOGOUTALLUSERS command just before BEGINARCHIVE should log out all users from the system, including the one running the calc script, and suspend whatever operations users might be running at that point. We are also educating our users not to run any database-restructuring jobs in the scheduled backup window.
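
For reference, the MaxL equivalent of logging out all sessions is below (a sketch; run it as an administrator just before BEGINARCHIVE).

MAXL> alter system logout session all;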

Sunday, May 11, 2008

OBIEE and PeopleSoft

Oracle Business Intelligence (OBIEE) provides an out-of-the-box RPD and Web Catalog for the PeopleSoft EPM Warehouse. You can download the Fusion Intelligence for PeopleSoft 9.0 or 8.9 software from the eDelivery site, and it gives you the out-of-the-box RPD and Web Catalog. Just deploy the RPD and Web Catalog, change the DB connection in the RPD to the PeopleSoft EPM Warehouse, and you are all set. There is no need to worry about DAC or Informatica, as the EPM Warehouse is populated using PeopleSoft technology. You are just using OBIEE as a presentation layer against the PeopleSoft EPM Warehouse. It also has built-in data security.

The Fusion Intelligence for PeopleSoft solution is applicable to customers who already have the EPM Warehouse and just need OBIEE as the presentation layer.

Saturday, May 10, 2008

OBIEE and OBIEE Plus

What is the difference between OBIEE and OBIEE Plus?

OBIEE Plus has four additional components, all of them Hyperion reporting components:
- Financial Reporting
- SQR Production Reporting
- Interactive Reporting
- Hyperion Web Analysis

Server queue full. Returning busy response to new connection

I have seen this error in the SAW Server log; when it happens, the SAW Server hangs and the login page does not come up when you launch the Analytics URL. The only way to bring back the service is to restart the SAW Server. The exact error in the SAW Server log file is:

Type: Warning, Severity: 50
Time: Xxx Xxx 99 99:99:99 2008
File: project/webcomm/socketserver.cpp, Line: 430
Properties:

Location:
saw.rpc.server.socketServer
Server queue full. Returning busy response to new connection


I was able to resolve this issue by tuning the reports. I noticed that some of the reports on the dashboards were not performing well and just kept spinning. Note that a report that keeps spinning and never comes back occupies a SAW Server thread; as more and more users access the same reports, they consume more SAW threads, and eventually the SAW Server cannot spawn any more threads to fulfil new requests. Either you wait until a thread becomes available (that is, until the query returns results) or you restart the SAW Server.

Make sure that none of the reports on the dashboards, and no ad-hoc reports, take a long time to execute. Also make sure you are using SAS Server (BI Server) caching so that queries do not have to go to the database to fetch records and results come back from the cache. That way, SAW threads are not tied up for long and become available quickly to fulfil new requests.

Monday, May 5, 2008

Error Generating Chart

When accessing Dashboards/Reports in Oracle Business Intelligence, you may see the below error.

Error Generating Chart
An error occurred during execution of "connect". Connection refused [Socket:30]
Error Details
Error Codes: ETI2U8FA

The cause of this error is that either the JavaHost process for the Chart Server is not running, or you have changed the JavaHost port number but forgotten to update the instanceconfig.xml file with the new port.

Please note that if you change the JavaHost port number (default 9810) in the JavaHost config.xml file, you also need to update the instanceconfig.xml file of the Analytics Web Server with the new port. The name-value pair you need to add is as follows.

<JavaHostProxy>
<Hosts><Host address="myhost.domain.com" port="newport" /></Hosts>
</JavaHostProxy>

Sunday, May 4, 2008

Change Chart Server Port for Business Intelligence

If you need to change the Chart Server (JavaHost) port for Oracle Business Intelligence, you need to modify the config.xml file. This file is located in the /OracleBI/web/javahost/config folder. The default value is 9810.

<port>9810</port>

If you change the JavaHost port in the config.xml file, then you also need to change the instanceconfig.xml file so that the Analytics Web Server knows which port the Chart Server is listening on. The following section in instanceconfig.xml says that the JavaHost process is running on host myhost.domain.com on port 10053.

<JavaHostProxy><Hosts><Host address="myhost.domain.com" port="10053"/></Hosts></JavaHostProxy>

Please note that this parameter is specific to OBIEE 10.1.3.2 on Linux and may have changed in earlier or later releases.

Database Session disappears while running Informatica ETL

Sometimes when a source SQL statement performs very poorly and takes a long time, you might find that the Oracle database session has disappeared when you query the v$session view. This usually happens in firewall environments.

When there is a firewall between Informatica and the source database, you need to make sure the database server sends probe packets to the client so that the firewall does not kill the connection due to idle time. Check the SQLNET.EXPIRE_TIME parameter in the sqlnet.ora file on the database server; the Oracle-recommended value is 10 minutes, and the default is 0. With SQLNET.EXPIRE_TIME set to 10, the database server sends a probe packet every 10 minutes, so the firewall will not kill the client connection even when SQL queries run very slowly and take hours just to prepare the SQL execution.
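
The setting is a single line in sqlnet.ora on the database server (value in minutes):

SQLNET.EXPIRE_TIME = 10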

If the DB sessions still disappear even with SQLNET.EXPIRE_TIME set in sqlnet.ora, check the tcp_keepalive_time parameter on the client machine. The default value for this parameter is 2 hours (7200 seconds) on Linux and Solaris.

$ cat /proc/sys/net/ipv4/tcp_keepalive_time

Set this parameter to less than the firewall timeout value. If the firewall timeout is 30 minutes, the statement below sets tcp_keepalive_time to 15 minutes on Linux. Use ndd -set on Solaris to set the equivalent parameter.

$ echo 900 > /proc/sys/net/ipv4/tcp_keepalive_time
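
On Solaris, the rough equivalent is below; note that the parameter there is named tcp_keepalive_interval and takes milliseconds, so verify the name and units on your release.

$ ndd -set /dev/tcp tcp_keepalive_interval 900000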

Optimize Informatica ETL Incremental Mappings

Some of the incremental-run ETL mappings with Oracle EBS as the source database run very slowly. The SELECT statement against the source database itself is very slow, and against a huge database, row fetching only starts after 8-10 hours.

In our experience, the incremental SQL queries use the OR operator, and the Oracle optimizer performs much better with the UNION operator than with OR.

To optimize incremental ETL queries against a huge database, test the queries with the UNION operator instead of OR, as sketched below.
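
A hypothetical sketch of the rewrite; the table and columns below are illustrative only, not taken from an actual out-of-the-box mapping.

-- Incremental filter written with OR
SELECT trx.TRX_NUMBER, trx.LAST_UPDATE_DATE
FROM RA_CUSTOMER_TRX_ALL trx
WHERE trx.LAST_UPDATE_DATE > TO_DATE('2008-05-01', 'YYYY-MM-DD')
OR trx.CREATION_DATE > TO_DATE('2008-05-01', 'YYYY-MM-DD');

-- The same filter rewritten with UNION
SELECT trx.TRX_NUMBER, trx.LAST_UPDATE_DATE
FROM RA_CUSTOMER_TRX_ALL trx
WHERE trx.LAST_UPDATE_DATE > TO_DATE('2008-05-01', 'YYYY-MM-DD')
UNION
SELECT trx.TRX_NUMBER, trx.LAST_UPDATE_DATE
FROM RA_CUSTOMER_TRX_ALL trx
WHERE trx.CREATION_DATE > TO_DATE('2008-05-01', 'YYYY-MM-DD');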

If UNION also does not perform well, test UNION ALL. UNION ALL will pull in duplicates, but you can always remove them from the staging table after the ETL run is done. An example of SQL to delete the duplicate rows:


DELETE FROM W_SALES_INVOICE_LINE_FS A
WHERE ROWID > (SELECT MIN(ROWID)
               FROM W_SALES_INVOICE_LINE_FS B
               WHERE A.INTEGRATION_ID = B.INTEGRATION_ID
                 AND A.DATASOURCE_NUM_ID = B.DATASOURCE_NUM_ID);