Thursday, October 9, 2008

Excel Add-in crash with Excel 2007 on Win XP SP3

We observed an issue with the Excel Add-in and Excel 2007 on Windows XP SP3.

1. Open Excel and connect to the Essbase server.
2. Do a Retrieve.
3. Double-click any member. Excel closes completely.
4. The following message appears:

"Microsoft Office Excel has encountered a problem and needs to close. We are sorry for the inconvenience."

This issue is specific to Windows XP SP3.

Patches 7317818 and 7035861 resolve this issue. The fix is available in 9.3.1.2.6.

Friday, October 3, 2008

Essbase crash on Linux

We noticed several crashes of the Essbase application process (ESSSRVR) on Linux, especially when users were restructuring and running calculations. The call stack in the exception log looks like this.


Process Type: Application
Application Name: XXXXX
Database Name: XXXXX
Exception Log File: /essbase/Hyperion/AnalyticServices/app/XXXX/XXXX/log00001.xcp
Current Thread Id: -1499931744
Signal Number: 0x11=Segmentation Violation
Signal Code: 0x-168009176=Unknown



----- Stack Trace -----



[0] /essbase/Hyperion/AnalyticServices/bin/libessutlu.so(adXcptWriteStackInfo+0x3e) [0xf73ad7de]
[1] /essbase/Hyperion/AnalyticServices/bin/libessutlu.so(adXcptCreateRpt+0x224) [0xf73ac8cc]
[2] /essbase/Hyperion/AnalyticServices/bin/libessutlu.so(adXcptHandlerDiag+0x83) [0xf73ac62b]
[3] /essbase/Hyperion/common/JRE/Sun/1.5.0/lib/i386/server/libjvm.so [0xf5e2ae4d]
[4] /essbase/Hyperion/common/JRE/Sun/1.5.0/lib/i386/server/libjvm.so [0xf5e28c7f]
[5] /essbase/Hyperion/common/JRE/Sun/1.5.0/lib/i386/server/libjvm.so(JVM_handle_linux_signal+0x1d8) [0xf5e2b6b8]
[6] /essbase/Hyperion/common/JRE/Sun/1.5.0/lib/i386/server/libjvm.so [0xf5e28ba4]
[7] [0xffffe600]
[8] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so(adCalmExec+0x64e) [0xf7bf4a5a]
[9] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so(adCalcCSCR+0x5fe) [0xf7bcb4b6]
[10] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so(adCalcCSCR+0x2040) [0xf7bccef8]
[11] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so(adCalcCSCR+0x6eb) [0xf7bcb5a3]
[12] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so(adCalcScrCalc+0xf5) [0xf7bcad01]
[13] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so(EpXSvrCalcExec+0x484) [0xf7d987dc]
[14] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so [0xf7d9ce47]
[15] /essbase/Hyperion/AnalyticServices/bin/libesssvrd.so(EpXSvrDispatch+0xf8) [0xf7d98350]

We noticed that the virtual memory (VM) for the ESSSRVR process was very high. Even for the Sample application, VM was around 1.5 GB, which is quite high. Since Essbase is a 32-bit application, it can only address 2 GB of memory. Because restructuring and calculations need more memory, chances are the process tries to go beyond 2 GB, hence the crash.

We changed one parameter in the essbase.cfg file: we set JvmModuleLocation to blank.

; JvmModuleLocation /essbase/Hyperion/common/JRE/Sun/1.5.0/lib/i386/server/libjvm.so
JvmModuleLocation

With this setting, the ESSSRVR process will not load the JVM, so the VM for the process stays low. After the change, VM for the Sample application was around 350 MB compared to 1.5 GB. This gives the process sufficient room to grow when users are restructuring or running calculations.
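A quick way to keep an eye on this is to print each ESSSRVR process's virtual size in GB from ps output; a minimal sketch (the process name ESSSRVR is taken from the exception log above — adjust the pattern if your ps output differs):

```shell
# Print the virtual size (VSZ, reported in KB) of each ESSSRVR process in GB,
# so it can be compared against the ~2 GB limit of a 32-bit process.
ps -eo vsz=,comm= | awk '$2 ~ /ESSSRVR/ { printf "%s: %.1f GB\n", $2, $1/1024/1024 }'
```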

After this change, we have not seen the crash again.

We wish 64-bit Essbase were available on Linux so that we would not hit this limitation.

Sunday, September 28, 2008

BI Publisher login fails in 10.1.3.2

In the out-of-the-box OBIEE/BI Publisher configuration, logging in to BI Publisher fails with the default user Administrator/Administrator.

In the default configuration, it was observed that the authentication mechanism was set to the OBIEE Server instead of BI Publisher. The configuration file of BI Publisher is /oraclebi/product/OracleBI/xmlp/XMLP/Admin/Configuration. The SECURITY_MODEL tag in this file determines the authentication mechanism for BI Publisher. The default value is BI_SERVER.

In order to use the BI Publisher authentication mechanism, set the SECURITY_MODEL tag value to XDO.

Tuesday, May 27, 2008

Planning Apps not visible in Workspace

When we launched Workspace and went to Navigate -> Applications, no Planning applications were visible. But in the EAS console, we could see all the applications under Essbase, and they were running.

This can happen if you have restarted the Essbase server.

Please make sure to restart the Planning applications if you restarted Essbase for some reason. If you restart Essbase but not Planning, you may see some weird behavior.

Sunday, May 25, 2008

Start/Stop Essbase server on Linux

Source the Essbase environment variables.

HYPERION_HOME=/essbase/Hyperion;export HYPERION_HOME
ARBORPATH=/essbase/Hyperion/AnalyticServices;export ARBORPATH
ESSLANG=English_UnitedStates.Latin1@Binary;export ESSLANG
#LD_LIBRARY_PATH=$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386/server:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386/server:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386:$HYPERION_HOME/common/ODBC/Merant/5.2/lib:$ARBORPATH/bin:$LD_LIBRARY_PATH
#Use following if your deployment does not require CDF,CDM,Data mining,Single Sign-on or Triggers
LD_LIBRARY_PATH=$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386/server:$HYPERION_HOME/common/JRE/Sun/1.5.0/lib/i386:$HYPERION_HOME/common/ODBC/Merant/5.2/lib:$ARBORPATH/bin:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH
PATH=$ARBORPATH/bin:$PATH;export PATH
ODBCINI=$HYPERION_HOME/common/ODBC/Merant/5.2/odbc.ini;export ODBCINI

To start the Essbase server

$./startEssbase.sh -b &

To stop the Essbase server

$./essmsh -l asadmin
MAXL> alter system shutdown;

Friday, May 23, 2008

PLP_GLRevnGroupAccount_A_Stage_Full Informatica ETL Mapping Slow

In one project, we saw that the out-of-the-box mapping PLP_GLRevnGroupAccount_A_Stage_Full in Informatica was taking a really long time. In a full ETL, this mapping took almost 30 hours.

After reviewing the session log, it was found that most of the time was spent in the Reader thread. Since the issue was with the source SQL, the same SQL was run from SQL*Plus, where it ran in only one hour.

After reviewing the Informatica configuration, it was found that the Informatica server was forced to use some pre-SQL to change the database session settings:

alter session set optimizer_index_cost_adj=1;
alter session set optimizer_index_caching=10;

This pre-SQL changes the explain plan chosen by the optimizer, and the optimizer was not performing well with the source query, even though it performed well for other mappings.

It was advised to remove this pre-SQL at the global level and use it only at the mapping level for the mappings that perform well with it.

Sunday, May 18, 2008

DAC and serverSetupPrompt.sh

There is no DAC Server installer for the Unix platform. You need to set up the Analytics client on a Windows machine first and then move the DAC folder to the Unix platform. After the DAC folder is moved, if you try to run any of the .sh scripts, you may see some weird behavior. For example, when you try to run serverSetupPrompt.sh, you may see the following error.

Exception in thread "main" java.lang.NoClassDefFoundError: com/siebel/etl/gui/login/TextModeServerSetup

If you try to run the other .sh files, you may notice that the environment variables are not set up correctly.

When you move the DAC folder to Unix, make sure to run the dos2unix utility to remove the CR+LF characters from all the .sh files. If you do not run dos2unix, none of the .sh files will work as expected on the Unix platform. You can download the dos2unix utility from http://www.download3k.com/Install-Dos2Unix.html
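If installing dos2unix is not an option, the same conversion can be sketched with tr, which is available on any Unix box:

```shell
# Strip carriage returns from every .sh file in the current directory.
# Convert into a temp file first so a failed run leaves the original intact.
for f in *.sh; do
  tr -d '\r' < "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```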

Qserver is already bound to the port

Sometimes when you start the DAC Server on Unix, you will notice this error in the DAC Server log.

Qserver is already bound to the port 3141.

It was observed that when the DAC Server was shut down on Linux using the stopserver.sh script, the QServer Java process had not gone down and was still running.

When you shut down the DAC Server, make sure that the QServer Java process also goes down. If not, you have to kill the Java process before starting the DAC Server again.
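That check can be sketched as below; the pattern QServer is an assumption about how the Java process shows up in ps output, so verify it with a plain ps -ef before killing anything:

```shell
# Find a leftover QServer Java process and kill it before restarting the
# DAC Server. The [Q] in the pattern stops grep from matching itself.
pid=$(ps -ef | grep '[Q]Server' | awk '{print $2}')
if [ -n "$pid" ]; then
  echo "Killing leftover QServer process $pid"
  kill "$pid"
fi
```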

MESSAGE:::Cannot run program "pmcmd": CreateProcess error=2, The system cannot find the file specified

When you attempt to register the Informatica servers in DAC and click the "Test" button, the test may fail with the error below.

SEVERE Thu Jan 11 18:52:41 EDT 2008

MESSAGE:::Cannot run program "pmcmd": CreateProcess error=2, The system cannot find the file specified

In order to troubleshoot this error, try to run the pmcmd and pmrep commands from a command window and see if they run. If not, the PATH environment variable is not set correctly.
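A small sketch of that check from a shell (the same idea applies to pmrep):

```shell
# Verify that pmcmd resolves through PATH before testing the Informatica
# server registration in DAC.
if command -v pmcmd >/dev/null 2>&1; then
  echo "pmcmd found at: $(command -v pmcmd)"
else
  echo "pmcmd not on PATH - add the Informatica bin directory to PATH"
fi
```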

ORA-01652: unable to extend temp segment by 32 in tablespace TEMP while running ETL

You may receive the error "ORA-01652: unable to extend temp segment by 32 in tablespace TEMP" while running the ETL, especially when the tables are huge. If a SELECT statement runs against tables that have 70-80 million rows and the optimizer does a hash join, it will use TEMP and the hash-area size to build the hash. If the TEMP tablespace is not big enough, you may receive this error. There is no one-size-fits-all value, but in our case 100 GB of space for the TEMP tablespace resolved the error. You may have to set it higher if your tables have more than 100 million rows.

Do not change the explain plan to use a NESTED LOOPS join: the optimizer initially takes more space and time to build the hash, but once the hash is built, the query returns results much faster and performs better than a nested loop.

Saturday, May 17, 2008

Client Commands are Currently Not Being Accepted

If you receive the error "Client Commands are Currently Not Being Accepted" while performing some operation on an Essbase database, some restructuring is likely happening for that database. If a user has started a calc script that forces a restructure of the database, you cannot run any commands against the database until the restructuring completes.

You can replicate this error with the following sequence of commands.

Force the Essbase Database Restructure

MAXL> alter database Sample.Basic force restructure;

Then immediately run some command against the database, like ESSCMD GETDBINFO or GETDBSTATS, and you will see the following error.

localhost:Sample:basic:asadmin[1]->getdbinfo
GetDbInfo:


[Sat May 17 12:37:31 2008]localhost/Sample/basic/asadmin/Error(1013204)
Client Commands are Currently Not Being Accepted

Command Failed. Return Status = 1013204

Hyperion Essbase Database Validation

How do you check whether an Essbase database is really corrupted?

If an Essbase database is really corrupted, you will not be able to start it up; a corrupted database will not load. If the database loads with no problem, you can force a restructure of the database using the following MaxL command.

alter database Sample.Basic force restructure;

If the restructure completes successfully, it indicates that the database is not corrupted.

NQCMD and Cache

In one of my projects, the customer did not have the OBIEE Scheduler set up, so iBots were not an option to seed the cache. But the customer still wanted to seed the cache and clear it after the ETL run finishes.

NQCMD is one option to achieve this. We used the NQCMD utility to clear the cache and then seed it by running the logical queries for the reports. For example:

working_dir='/oraclebi/product/OracleBI'
cd $working_dir
cd setup
. ./user.sh
. ./common.sh
. ./sa-cli.sh
cd ..
nqcmd -d AnalyticsWeb -u Administrator -p SADMIN -s cmd.txt -o out.txt

In the cmd.txt, use the following.

{call SAPurgeAllCache()};
SELECT Time."Fiscal Quarter" saw_0, "Facts - AR Payments Due Aging"."AR Due1-30 Amount" saw_1, "Facts - AR Payments Due Aging"."AR Due 31-60 Amount" saw_2, "Facts - AR Payments Due Aging"."AR Due 61-90 Amount" saw_3, "Facts - AR Payments Due Aging"."AR Due 90+ Amount" saw_4, "Facts - AR Payments Due Aging"."Total AR Due Amount" saw_5, "Facts - AR Payments Due Aging"."AR Due 1-30 Amount to Total Due %" saw_6, "Facts - AR Payments Due Aging"."AR Due 31-60 Amount to Total Due %" saw_7, "Facts - AR Payments Due Aging"."AR Due 61-90 Amount to Total Due %" saw_8, "Facts - AR Payments Due Aging"."AR Due 90+ Amount to Total Due %" saw_9, "Facts - AR Payments Due Aging"."AR Due Amount to Total %" saw_10 FROM "Financials - AR Overview" WHERE ((Time."Fiscal Quarter" <= VALUEOF(CURRENT_FSCL_QUARTER)) AND (Time."Fiscal Quarter" > VALUEOF(FSCL_QTR_YEAR_AGO))) ORDER BY saw_0;
SELECT Time."Fiscal Year" FROM "Financials - AR Overview" WHERE Time."Fiscal Year" BETWEEN valueof(FILTER_FSCL_FROM_YEAR) AND valueof(FILTER_FSCL_TO_YEAR) ORDER BY 1 DESC;
SELECT Time."Fiscal Quarter" FROM "Financials - AR Overview" WHERE Time."Fiscal Year" BETWEEN valueof(FILTER_FSCL_FROM_YEAR) AND valueof(FILTER_FSCL_TO_YEAR) ORDER BY 1 DESC;

The cmd.txt first clears the cache and then seeds it by running all the logical queries for each report the customer has.

If you have BI on a Unix platform, you can set this up as a cron job.
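For example, after wrapping the nqcmd invocation above in a script, a crontab entry along these lines would run it nightly (the script path and the 2 AM schedule are assumptions, not part of the product):

```shell
# crontab entry: clear and seed the BI Server cache at 2:00 AM every night,
# after the ETL window. /oraclebi/scripts/seed_cache.sh is a hypothetical
# wrapper around the nqcmd command shown above.
0 2 * * * /oraclebi/scripts/seed_cache.sh >> /oraclebi/scripts/seed_cache.log 2>&1
```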

Wednesday, May 14, 2008

SIL_GLRevenueFact_GL_Info_Update and SIL_ARTransactionFact_Gl_Info_Update

I have seen some performance issues with the SIL_GLRevenueFact_GL_Info_Update and SIL_ARTransactionFact_Gl_Info_Update mappings. The issue occurs with both full and incremental runs. The throughput is really low, around 90-100 rows per second. We noticed this throughput when the target table W_GL_REVN_F had 20-30 million rows.

With 20-30 million rows in the target table, these mappings take 48-72 hours for the full run.

The only workaround we found was to update the target table directly in the database and set the DAC task to Completed. You can use the MERGE statement in Oracle DB to update the target table, which is more optimized than an UPDATE and INSERT combination. For example:

MERGE INTO W_GL_REVN_F A USING W_GL_REVN_F_TMP B
ON (A.INTEGRATION_ID = B.INTEGRATION_ID AND A.DATASOURCE_NUM_ID = B.DATASOURCE_NUM_ID)
WHEN MATCHED THEN
UPDATE SET
A.ACCT_DOC_NUM = B.ACCT_DOC_NUM,
A.ACCT_DOC_ITEM = B.ACCT_DOC_ITEM,
A.ACCT_DOC_SUB_ITEM = B.ACCT_DOC_SUB_ITEM,
A.DOC_HEADER_TEXT = B.DOC_HEADER_TEXT,
A.LINE_ITEM_TEXT = B.LINE_ITEM_TEXT,
A.DOC_STATUS_WID = B.DOC_STATUS_WID,
A.POSTED_ON_DT_WID = B.POSTED_ON_DT_WID,
A.POSTED_ON_TM_WID = B.POSTED_ON_TM_WID,
A.GL_RECONCILED_ON_DT = B.GL_RECONCILED_ON_DT,
A.GL_RECONCILED_ON_PROC_WID = B.GL_RECONCILED_ON_PROC_WID,
A.DELETE_FLG = B.DELETE_FLG,
A.W_UPDATE_DT = B.W_UPDATE_DT,
A.ETL_PROC_WID = B.ETL_PROC_WID;


Here W_GL_REVN_F_TMP is a temp table that was set up to hold the output of the original SIL_GLRevenueFact_GL_Info_Update mapping. This temp table was populated from the source SQL of SIL_GLRevenueFact_GL_Info_Update.

The same solution was used for SIL_ARTransactionFact_Gl_Info_Update.

We used this workaround only for the full ETL. The incremental run is also slow for these two mappings, but since the inserts/updates are not that numerous, they execute within our expected maintenance-window duration.

Cannot transition the state of database [XXX_XXX] between read-write and read-only modes while other users are connected to the database

We have a backup script, which fires every night and uses ESSCMD commands to take an Essbase file-system backup. This script:

- Puts all the databases in read-only mode using the BEGINARCHIVE command
- Takes the file system backup
- Puts all the databases back in read-write mode using the ENDARCHIVE command

We have sometimes noticed the error below.

[Mon Dec 11 18:18:00 2006]Local/ESSBASE0///Error(1013239)
Cannot transition the state of database [XXX_XXX] between read-write and read-only modes while other users are connected to the database.

This error was due to a calculation script an end user was running at the time of the backup. Because of that calculation script, BEGINARCHIVE was not able to put the database into read-only mode.

I think if you use the LOGOUTALLUSERS command just before BEGINARCHIVE, it should log all users out of the system, including the one running the calc script, and suspend any operations users might be running at that point. That said, we are educating our users not to run any database restructuring jobs in the scheduled backup window.
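The adjusted nightly sequence might look like this sketch; the ESSCMD script names, the backup path, and the date-stamped archive name are all hypothetical placeholders:

```shell
# Nightly Essbase file-system backup, with LOGOUTALLUSERS run before
# BEGINARCHIVE so a running calc script cannot block the mode switch.
# beginarchive.scr would contain: LOGIN; LOGOUTALLUSERS; BEGINARCHIVE ...
ESSCMD beginarchive.scr
# Back up the app directory while the databases are read-only.
tar -czf /backup/essbase_$(date +%Y%m%d).tar.gz /essbase/Hyperion/AnalyticServices/app
# endarchive.scr would contain: LOGIN; ENDARCHIVE ...
ESSCMD endarchive.scr
```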

Sunday, May 11, 2008

OBIEE and PeopleSoft

Oracle Business Intelligence (OBIEE) provides an out-of-the-box RPD and Web Catalog for the PeopleSoft EPM Warehouse. You can download the Fusion Intelligence for PeopleSoft 9.0 or 8.9 software from the eDelivery site, and this will give you the out-of-the-box RPD and Web Catalog. Just deploy this RPD and Web Catalog and change the DB connection in the RPD to the PeopleSoft EPM Warehouse, and you are all set. There is no need to worry about DAC or Informatica, as the EPM Warehouse is populated using PeopleSoft technology. You are just using OBIEE as a presentation layer against the PeopleSoft EPM Warehouse. It also has built-in data security.

The Fusion Intelligence for PeopleSoft solution is applicable to customers who already have the EPM Warehouse and just need OBIEE for the presentation layer.

Saturday, May 10, 2008

OBIEE and OBIEE Plus

What is the difference between OBIEE and OBIEE Plus?

OBIEE Plus has four additional components, all of them Hyperion reporting components:
- Financial Reporting
- SQR Production Reporting
- Interactive Reporting
- Hyperion Web Analysis

Server queue full. Returning busy response to new connection

I have seen this error in the SAW Server log, and when it happens, the SAW Server hangs: when you launch the Analytics URL, the login page does not come up. The only way to bring back the service is to restart the SAW Server. The exact error in the SAW Server log file is:

Type: Warning, Severity: 50
Time: Xxx Xxx 99 99:99:99 2008
File: project/webcomm/socketserver.cpp, Line: 430
Properties:

Location:
saw.rpc.server.socketServer
Server queue full. Returning busy response to new connection


I was able to resolve this issue by tuning the reports. I noticed that some of the reports in the dashboards were not performing well and kept spinning. Note that if a report does not come back and keeps spinning, it occupies a SAW Server thread; as more and more users access the same reports, they consume SAW threads until eventually SAW cannot spawn any more threads to fulfill new requests. You either have to wait until a thread becomes available, meaning a query returns its results, or restart the SAW Server.

Make sure that none of the reports in the dashboards, or any ad hoc reports, take a long time to execute. Also make sure you are using SAS Server caching so that queries do not have to go to the database to fetch records and results come from the cache. That way, SAW threads are not occupied for long and become available quickly to fulfill new requests.

Monday, May 5, 2008

Error Generating Chart

When accessing Dashboards/Reports in Oracle Business Intelligence, you may see the below error.

Error Generating Chart
An error occurred during execution of "connect". Connection refused [Socket:30]
Error Details
Error Codes: ETI2U8FA

The cause of this error is that either the JavaHost process for the Chart Server is not running, or you have changed the JavaHost port number but forgot to update the instanceconfig.xml file with the new port.

Please note that if you change the JavaHost port number (default 9810) in the config.xml file of JavaHost, you also need to update the instanceconfig.xml file of the Analytics Web Server with the new port. The name-value pair you need to add is as follows.

<JavaHostProxy>
<Hosts><Host address="myhost.domain.com" port="newport" /></Hosts>
</JavaHostProxy>

Sunday, May 4, 2008

Change Chart Server Port for Business Intelligence

If you need to change the Chart Server (JavaHost) port for Oracle Business Intelligence, you need to modify the config.xml file. This file is located in the /OracleBI/web/javahost/config folder. The default value is 9810.

<port>9810</port>

If you change the JavaHost port in the config.xml file, you also need to change the instanceconfig.xml file so that the Analytics Web Server knows which port the Chart Server is listening on. The following section in the instanceconfig file specifies that the JavaHost process is running on host myhost.domain.com on port 10053.

<javahostproxy><hosts><host address="myhost.domain.com" port="10053"/></hosts></javahostproxy>

Please note that this parameter is specific to OBIEE 10.1.3.2 on Linux and may have changed in earlier or later releases.

Database session disappears while running Informatica ETL

Sometimes, when the source SQL statement performs very poorly and takes a long time, you might see the Oracle database session disappear when querying the v$session view. This usually happens in firewall environments.

When there is a firewall between Informatica and the source database, you need to make sure the database server sends probe packets to the client so that the firewall does not kill the connection due to idle time. Check the expire_time parameter in the sqlnet.ora file on the database server. The Oracle recommended value is 10 minutes; the default value is 0. When expire_time is set to 10 minutes, the database server will send probe packets every 10 minutes, and the firewall will not kill the client connection even if SQL queries run very slowly and take hours just to prepare the SQL execution.

If DB sessions still disappear even after setting expire_time in sqlnet.ora, check the tcp_keepalive_time parameter on the client machine. The default value for this parameter is 2 hours (7200 seconds) on Linux and Solaris.

$ cat /proc/sys/net/ipv4/tcp_keepalive_time

Set this parameter to less than the firewall timeout value. If the firewall timeout is 30 minutes, the statement below sets tcp_keepalive_time to 15 minutes on Linux. Use ndd -set on Solaris to set the same parameter.

$echo 900 > /proc/sys/net/ipv4/tcp_keepalive_time
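The relationship between the two timeouts can be sketched as simple shell arithmetic, using the 30-minute firewall timeout from the example above:

```shell
# Pick a keepalive interval safely below the firewall idle timeout:
# here, half of a 30-minute timeout, expressed in seconds.
firewall_timeout_min=30
keepalive_sec=$(( firewall_timeout_min * 60 / 2 ))
echo "$keepalive_sec"    # 900 seconds = 15 minutes
# Then, as root on Linux:
#   echo "$keepalive_sec" > /proc/sys/net/ipv4/tcp_keepalive_time
```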

Optimize Informatica ETL Incremental Mappings

Some of the ETL mappings for the incremental run with EBS as the source database run very slowly. The SELECT statement against the source database itself is very slow, and with a huge database, row fetching only started after 8-10 hours.

Based on our experience, the incremental SQL queries use the OR operator, and the Oracle optimizer performs much better with the UNION operator than with OR.

In order to optimize incremental ETL queries against a huge database, test the queries with the UNION operator instead of OR.

If UNION also does not perform well, test UNION ALL. UNION ALL will pull in duplicates, but you can always remove the duplicates from the staging table after the ETL run is done. An example of SQL to delete the duplicate rows is:


DELETE FROM W_SALES_INVOICE_LINE_FS A WHERE ROWID > (SELECT MIN(ROWID) FROM W_SALES_INVOICE_LINE_FS B WHERE A.INTEGRATION_ID = B.INTEGRATION_ID AND A.DATASOURCE_NUM_ID = B.DATASOURCE_NUM_ID)

Tuesday, April 15, 2008

Can we install DAC client and DAC Server without installing Analytics application?

Sometimes a customer does not want to install the 7.9.x application but just needs the DAC client or DAC Server. In order to set up the DAC client or DAC Server, you do not have to run the Analytics application installer; that is the beauty of DAC. If you have the DAC client or DAC Server set up on one machine, just zip up the DAC folder and then unzip it on the target machine. After unzipping, if the directory structure differs between source and target, modify the DAC config files config.bat/config.sh and dac_env.bat/dac_env.sh and update the paths.

Since the DAC client/DAC Server is a Java-based, platform-independent application, there is no separate DAC Server installer for the Unix platform. You need to set up DAC on a Windows machine and then move the DAC folder to the Unix machine in order to run the DAC Server on Unix. After you move the DAC folder to Unix, make sure to run the dos2unix utility on all the .sh files to remove the carriage-return/line-feed characters. If you do not run this utility and try to source the environment variables using a .sh file, they will not be set correctly.

DAC Task synchronize results in Error

When you try to synchronize a task in DAC, it might result in the following error.

MESSAGE:::Failed to create folder/workflow/session cache fileEXCEPTION CLASS::: com.siebel.analytics.etl.infa.interaction.PmrepUtilsException
com.siebel.analytics.etl.infa.interaction.PmrepUtils.exportListWorkflow(PmrepUtils.java:659)com.siebel.analytics.etl.infa.fileParsing.TaskSync.sync(TaskSync.java:147)com.siebel.analytics.etl.client.action.TaskSynchronizationAction.doOperation(TaskSynchronizationAction.java:123)com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
::: CAUSE :::EXCEPTION CLASS::: java.lang.NullPointerException
com.siebel.analytics.etl.infa.interaction.PmrepUtils.getRelatedObjects(PmrepUtils.java:520)com.siebel.analytics.etl.infa.interaction.PmrepUtils.getWorkflowObjects(PmrepUtils.java:490)com.siebel.analytics.etl.infa.interaction.PmrepUtils.readSessions(PmrepUtils.java:447)com.siebel.analytics.etl.infa.interaction.PmrepUtils.exportListWorkflow(PmrepUtils.java:650)com.siebel.analytics.etl.infa.fileParsing.TaskSync.sync(TaskSync.java:147)com.siebel.analytics.etl.client.action.TaskSynchronizationAction.doOperation(TaskSynchronizationAction.java:123)com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)

This might be because of the DAC client's Windows machine setup. The DAC client uses the pmrep.exe utility to connect to the Informatica repository server, but if the PATH environment variable does not point to the Informatica client installation on the Windows machine, the DAC task synchronize will result in the above error.

Please add the Informatica client path to the PATH environment variable, restart the DAC client, and then try to synchronize the task again.

Saturday, April 12, 2008

DBG_21045 ERROR: Can't create files in the directory

While running the ETL, you may encounter the following error

TRANSF_1_1_1> PMF_15004 Unable to open file.
PMF_15004 [/oraclebi/infrmatica/714/server/Cache/PMLKUP9442_262173.dat]
TRANSF_1_1_1> DBG_21045 ERROR: Can't create files in the directory "/oraclebi/infrmatica/714/server/Cache"

TRANSF_1_1_1> DBG_21052 Check file system permission and free space for possible failure

This error usually occurs when Informatica tries to create a huge cache file but no space is available on the mount point. Check the available space using the df -h command when this error occurs, and add more disk space to the mount point in question.
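A pre-flight space check along these lines could run before a large ETL; the mount point and the 10 GB threshold are assumptions, so point it at your own Cache directory and pick a threshold that matches your cache sizes:

```shell
# Warn if the cache mount point has less than 10 GB free.
cache_dir=/tmp                      # replace with your Informatica Cache directory
min_kb=$(( 10 * 1024 * 1024 ))      # 10 GB expressed in KB
avail_kb=$(df -Pk "$cache_dir" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$min_kb" ]; then
  echo "WARNING: only ${avail_kb} KB free on ${cache_dir}"
fi
```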

ORA-00600: internal error code, arguments: [kxfqupp_bad_cvl]

When running the ETL job, you may encounter the following error

Failed : CREATE BITMAP INDEX
W_SLS_CC_LN_F_F34
ON
W_SALES_CYCLE_LINE_F
(
PROFIT_CENTER_WID ASC
)
NOLOGGING PARALLEL TABLESPACE OLAP_REP_INDEX With error message : java.sql.SQLException: ORA-00600: internal error code, arguments: [kxfqupp_bad_cvl], [8863], [6], [0], [], [], [], []

After 10 attempts.

This error occurs when creating an index with the parallel option on huge tables. If the table size is small to medium, the error does not occur when creating the index with the parallel option.

There are two ways to resolve this issue: either create the index serially, which might take a lot of time, or fix the Oracle DB. There is a bug in the Oracle DB (10.2.0.3, to be specific), and a patch is available for it. Apply patch 4695511 to fix this issue.

DAC Schedule impacted by Daylight Saving time (DST)

The DAC schedule may be impacted by Daylight Saving Time (DST) if the JRE used is not DST-patched. The DAC Server uses the JRE, and if that JRE does not have the DST fix, the DAC schedule will be impacted when the time changes.

You need to check whether the JRE in use has the DST fix. Sun Microsystems provides the Timezone Updater tool (tzupdater.jar) to update the timezone data in the JDK/JRE installed on the server. More information about the Timezone Updater tool can be found at http://java.sun.com/javase/tzupdater_README.html. Download the Timezone Updater from http://java.sun.com/javase/downloads/index.jsp#timezone and run it to update the timezone data.

The usage of the TZUpdater tool is as follows

java -jar tzupdater.jar options

Where options are

-t - Run the verification test. Use this option to test whether the JDK/JRE is up to date with the timezone data.

-u - Update the timezone data. If the verification fails with option -t, use -u to update the JDK/JRE.

Start the DAC Server on Unix in background

There is a startserver.sh script available in the /DAC folder to start the DAC Server process. But if you just start the DAC Server with the startserver.sh script, the DAC Server process will die after the user who started it closes the SSH session. You need to start the DAC Server process using nohup, so that the SIGHUP signal is ignored, and run it in the background.

$pwd
/oraclebi/product/OracleBI/DAC
$nohup ./startserver.sh >> dacserver.log 2>&1 &

Change Ports for SAW, SAS and Scheduler component

Sometimes there is a requirement to change the default ports for the Oracle Analytics Web Server, Analytics Server, and Scheduler components. Some companies even have a security policy not to use the default ports for applications.

In order to change the default port for the Oracle Analytics Server (SAS), open the NQSConfig.INI file from /oraclebi/product/OracleBI/server/Config in vi or any other editor and change the value of the RPC_SERVICE_OR_PORT parameter. The default value is 9703. For example:

RPC_SERVICE_OR_PORT = 10073;

Once you change the SAS Server port, make sure to modify the AnalyticsWeb ODBC connection in the odbc.ini file in the /oraclebi/product/OracleBI/setup folder. This ODBC connection is used by the SAW Server, so it also needs to be changed to reflect the correct port.

[AnalyticsWeb]
Driver=/oraclebi/product/OracleBI/server/Bin/libnqsodbc.so
Description=Oracle BI Server
ServerMachine=local
Repository=
Catalog=
UID=
PWD=
Port=10073


In order to change the default port for the Oracle Analytics Web Server (SAW), open the instanceconfig.xml file from /oraclebi/product/OracleBIData/web/config in vi or any other editor. Add the Listener element, with a port attribute, within the ServerInstance tag. The default value is 9710. For example:

<Listener port="10076"/>

Once you change the SAW Server port, also make sure to change the configuration file of the SAW plug-in, since the plug-in needs to know where the SAW Server is running. In the case of the Oracle Application Server J2EE container, update the web.xml file of the Analytics application in the ias/product/asmt_10131/j2ee/bianalytics/applications/analytics/analytics/WEB-INF folder.

<servlet>
<servlet-name>SAWBridge</servlet-name>
<servlet-class>com.siebel.analytics.web.SAWBridge</servlet-class>
<init-param>
<param-name>oracle.bi.presentation.sawserver.Host</param-name>
<param-value>localhost</param-value>
</init-param>
<init-param>
<param-name>oracle.bi.presentation.sawserver.Port</param-name>
<param-value>10076</param-value>
</init-param>
</servlet>


In order to change the default port for the Oracle Analytics Scheduler component, configure the Scheduler using the schconfig utility on the Unix platform, or via the Job Manager on Windows. Launch the schconfig utility:

$pwd
/oraclebi/product/OracleBI/setup
$. ./user.sh
$. ./sa-init.sh
$schconfig

Select Option 1 - Configure Scheduler and then Option 2 - General. Change the port number in option 4. The default port is 9705.

Friday, April 11, 2008

Setup OBIEE Scheduler on Linux

Several of my friends have asked me how to set up the OBIEE Scheduler on Linux. Here are the detailed steps for setting up the OBIEE Scheduler on Linux in 10.1.3.3.2. Please note that these steps may differ in earlier versions of Analytics (7.8.4) due to the introduction of cryptotools.

These are the steps to set up the Scheduler:

  1. Set up the DB schema for the Scheduler that is used to store the runtime data
  2. Configure the Scheduler on Linux
  3. Configure the Presentation Service so that it can connect to the Scheduler
  4. Start the Scheduler Service



1. Set up the DB schema for the Scheduler that is used to store the runtime data


The very first step in setting up the Scheduler is to set up its DB schema. The DB tables are created by the SQL script
$INSTALLDIR/OracleBI/server/Schema/SAJOBS.Oracle.sql. Please select the .sql file specific to your DB platform; this blog post assumes Oracle Database.



SQL>create user S_NQ_SCHED identified by S_NQ_SCHED;
SQL>grant connect,resource to S_NQ_SCHED;
SQL>alter user S_NQ_SCHED default tablespace OLAP;
SQL>alter user S_NQ_SCHED temporary tablespace TEMP;
SQL>alter user S_NQ_SCHED quota unlimited on OLAP;
SQL>connect S_NQ_SCHED/S_NQ_SCHED;
SQL>@$INSTALLDIR/OracleBI/server/Schema/SAJOBS.Oracle.sql;


This will set up the Scheduler tables S_NQ_JOB, S_NQ_JOB_PARAM, S_NQ_INSTANCE and S_NQ_ERR_MSG under the S_NQ_SCHED schema.
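If you want to double-check the schema step, a quick query from the same SQL*Plus session should list those four tables (illustrative):

```sql
SQL>select table_name from user_tables where table_name like 'S_NQ%';
```

If any of the four tables is missing, re-run SAJOBS.Oracle.sql and check its output for errors.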



2. Configure the Scheduler on Linux


-bash-3.00$ pwd
/oraclebi/product/OracleBI/setup
-bash-3.00$ . ./user.sh
-bash-3.00$ . ./sa-init.sh
-bash-3.00$ schconfig





Select Option 1 - Configure Scheduler and press Enter.



Then select Option 1 - Database and press Enter.



Select Option 3 and enter the Data Source Name; this should match a data source in the tnsnames.ora file on the Scheduler server. Select Option 4 and enter S_NQ_SCHED (the username), then select Option 5 and enter S_NQ_SCHED (the password). Select 0 to quit and save the changes.
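For reference, the Data Source Name you enter here must resolve via a tnsnames.ora entry on the Scheduler machine, along these lines (the alias SCHED_DSN, host, and service name below are placeholders for your own values):

```
SCHED_DSN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.company.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```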



Select Option 2 - General. Specify the Scheduler Administrator user and password. This can be the same as the BI Server Administrator user, or you can set up a new user in the RPD with the Administrator group assigned to it. Select Option 4 if you want to use a port for the Scheduler other than the default of 9705. Select 0 to quit and save the changes, then select 0 again to return to the main menu.



Select Option 2 to configure email. Select 1 - General to specify the SMTP server details: use Options 3 and 4 to set the SMTP host name and port. On Linux, if the sendmail daemon is running, you can use the local host name as the SMTP server and port 25. If sendmail (or whichever SMTP server you are using) is configured with authentication, select Option 8 - Authenticate against the SMTP Server, set it to TRUE, and specify the username and password that the Scheduler will use to authenticate against the SMTP server. Select 0 - Quit to return to the main menu and save the changes.



Select Option 3 to configure the Scheduler to talk to the Presentation Server. Use Option 1 - SAW Machine Name if the SAW Server is not running on the same host as the Scheduler or is using a non-default port.

Select 0-Quit to exit from the Scheduler Configuration and Save the changes.



3. Configure the Presentation Service so that it can connect to the Scheduler


Now you need to configure the Presentation Server to talk to the Scheduler Server.



  • Open the instanceconfig.xml file from /oraclebi/product/OracleBIData/web/config.

  • Check the Alerts tag and make sure it is pointing to the correct host and port for the Scheduler Server.

<alerts><scheduleserver>myhost.company.com:xxxx</scheduleserver></alerts>



  • Make sure that the CredentialStore tag points to the credentialstore.xml file.

<credentialstore><credentialstorage type="file" path="/oraclebi/product/OracleBIData/web/config/credentialstore.xml" passphrase="xxxxx"/></credentialstore>



  • Run the cryptotools utility to update the credentialstore.xml file


bash$ . /oraclebi/product/OracleBI/setup/sysenvinit.sh
bash$ cd /oraclebi/product/OracleBI/web/bin
bash$ cryptotools credstore -add -infile /oraclebi/product/OracleBIData/web/config/credentialstore.xml



Credential Alias: admin
Username: Administrator
Password: xxxxxxx
Do you want to encrypt the password? y/n (y): y
Passphrase for encryption: xxxxxx
Do you want to write the passphrase to the xml? y/n (n): y


Please make sure that the credentialstore.xml file is updated with a section like the following at the end of the file:


<sawcs:credential type="usernamePassword" alias="admin">
  <sawcs:username>Administrator</sawcs:username>
  <sawcs:password passphrase="xxxxxx">
    <xenc:EncryptedData>
      <xenc:EncryptionMethod Algorithm="http://www.rsasecurity.com/rsalabs/pkcs/schemas/pkcs-5#pbes2">
        <pkcs-5:PBES2-params>
          <pkcs-5:KeyDerivationFunc Algorithm="http://www.rsasecurity.com/rsalabs/pkcs/schemas/pkcs-5#pbkdf2">
            <pkcs-5:Parameters>
              <pkcs-5:IterationCount>1024</pkcs-5:IterationCount>
            </pkcs-5:Parameters>
          </pkcs-5:KeyDerivationFunc>
          <pkcs-5:EncryptionScheme Algorithm="http://www.w3.org/2001/04/xmlenc#tripledes-cbc"/>
        </pkcs-5:PBES2-params>
      </xenc:EncryptionMethod>
      <xenc:CipherData>
        <xenc:CipherValue>3xUah7l969vR8NepkxxN4Q</xenc:CipherValue>
      </xenc:CipherData>
    </xenc:EncryptedData>
  </sawcs:password>
</sawcs:credential>


The passphrase should match what you specified while running the cryptotools utility. Please note that the Credential Alias must be "admin" in order for the Presentation Server to talk to the Scheduler. If you specify any other alias, the Presentation Server will not be able to authenticate.


4. Start the Scheduler Service


Start the OBIEE Scheduler component and also restart the SAW Server if it is already running. Whenever you change the instanceconfig.xml file, always restart the SAW Server.



bash$ cd /oraclebi/product/OracleBI/setup
bash$ . ./user.sh
bash$ . ./sa-init.sh
bash$ ./run-sch.sh start


Please make sure the Scheduler process has started:


bash-3.00$ ps -fe|grep sch
13140 1 0 Mar24 ? 00:00:22 /oraclebi/product/OracleBI/server/Bin/nqscheduler
bash-3.00$


You can also check the Scheduler's port to make sure it is listening:


bash-3.00$ netstat -an|grep 9705
tcp 0 0 0.0.0.0:9705 0.0.0.0:* LISTEN
bash-3.00$