Critical performance problem upon bulk load of groups

All (including product development),
I think there are indexes missing on wwsec_flat$ and wwsec_sys_priv$. In any case, I'd like assistance in properly fixing the critical performance problems I see. Read on...
During and after a bulk load of a few (about 500) users and groups from an external database, it became evident that there is a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards, the machine went to 100% CPU just from logging in as the portal30 user (which happens to be the group owner of all the groups).
Running SQL trace points in the direction of the following SQL statement:
SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
WWPOB_PAGE$ WHERE ID = :b1
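(For anyone who wants to reproduce the trace, a minimal sketch of the standard session-tracing steps I used; the trace file ends up in USER_DUMP_DEST and tkprof is run from the OS shell:)
ALTER SESSION SET TIMED_STATISTICS = TRUE; -- include timings in the trace
ALTER SESSION SET SQL_TRACE = TRUE;        -- start writing the raw trace file
-- ... reproduce the slow login / addGroupToList calls here ...
ALTER SESSION SET SQL_TRACE = FALSE;       -- stop tracing
-- then format the trace, sorted by elapsed fetch time:
--   tkprof <tracefile>.trc trace_report.txt sort=fchela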
I checked the existing indexes and see that the following ones are missing (I'm about to test with these, but have not yet done so). Note that WWSEC_FLAT$ holds one row per group/member combination, so GROUP_ID and PERSON_ID alone are not unique; the indexes below are therefore created non-unique, and the duplicate "OWNER" column has been dropped from the composite index:
CREATE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
"GRANTEE_TYPE", "NAME", "OBJECT_TYPE_NAME")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
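(For reference, a quick way to list which indexes already exist on these tables; a minimal sketch against the standard Oracle dictionary views:)
SELECT index_name, column_name, column_position
FROM dba_ind_columns
WHERE table_owner = 'PORTAL30'
AND table_name IN ('WWSEC_FLAT$', 'WWSEC_SYS_PRIV$')
ORDER BY index_name, column_position;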
Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I observed it during the bulk load on my NT laptop, but so far have not had the time to test further).
Also note: In the call to addGroupToList, I set owner to true for all groups.
Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
Error: Problem calling addGroupToList for child group 'Marketing' (8030), list 'NO_OSL_Usenet' (8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
Please help. If you like, I can supply the tables and the Java program that I use. It's fully reproducible.
Thanks,
Erik Hagen (you may call me on +47 90631013)

YES!
I have now tested with the missing indexes inserted. The call to addGroupToList seems to take just as long as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones that are already there in Portal 3.0.8, but I guess some of those could have been deleted).
About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during the bulk load (I'll look into it as soon as possible and report what I find).
Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program) that will let anybody interested recreate the problem. Mail your interest to [email protected].
============================================
CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
ON PORTAL30.WWSEC_PERSON$("ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
ON PORTAL30.WWSEC_PERSON$("USER_NAME")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
"SPONSORING_MEMBER_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
ON PORTAL30.WWSEC_FLAT$("ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
"NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
"GRANTEE_USER_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
==================================
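(To confirm that the optimizer actually picks up the new indexes, a minimal sketch using the standard EXPLAIN PLAN facility; it assumes a PLAN_TABLE created with utlxplan.sql, and the WWSEC_FLAT$ query is only an illustrative stand-in for whatever statement your trace points at:)
EXPLAIN PLAN FOR
SELECT COUNT(*) FROM PORTAL30.WWSEC_FLAT$ WHERE GROUP_ID = :b1;
-- the classic pre-DBMS_XPLAN way of reading the plan:
SELECT LPAD(' ', 2 * level) || operation || ' ' || options || ' ' || object_name AS plan
FROM plan_table
START WITH id = 0
CONNECT BY PRIOR id = parent_id;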
Thanks,
Erik Hagen

Similar Messages

  • Performance Problem with Query load

    Hello,
    after the upgrade to SPS 23, we have some problems with loading a query. Before the upgrade, the query ran in 1-3 minutes; now it takes more than 40 minutes.
    Does anyone have an idea?
    Regards
    Marco

    Hi,
    I suggest executing the query in transaction RSRT with the option 'Execute + Debugger' to analyze further where exactly the query is taking time.
    Make sure to choose the appropriate 'Query Display' option (List/BEx Analyzer/HTML) before executing the query in debugger mode, since the display option also affects the query run time.
    Hope this info helps!
    Bala Koppuravuri

  • Performance problems on bulk data importing

    Hello,
    We are importing 3,500,000 customers and 9,000,000 sales orders from an external system into the CRM system initially. We developed ABAP programs that use standard BAPI functions to import the bulk data.
    We have seen that this process will take a very long time to finish, approximately 1.5 months. That is a very long time for us to wait; we want to complete this job in about a week.
    Have we done something wrong? For example, are there other fast, SAP-standard ways to import bulk partners and sales orders without developing ABAP programs?
    best regards,
    Cuneyt Tektas

    Hi Cuneyt,
    The SAP standard supports import from an external source. You can use the XIF adapter, or you can also use eCATT.
    Thanks,
    Vikash.

  • Performance problem at bulk insert with spatial index

    Hi,
    I have a table with SDO_GEOMETRY.
    Insert without a spatial index is very fast, but with an active spatial index it's very slow.
    So for the first big import of data, I can drop the index, import the data, and create the index again. That's 10 times faster!
    But for an already very big table that is no option.
    The 10g1 User's Guide (1) says in section 4.1.3 that the spatial index should be set to 'deferred', the data should be inserted, and then the index should be synchronized again. That sounds very good, but I can't find this in the 11g1 User's Guide.
    I tried it (11g1), but the performance is even worse than with an active index!
    What could be my mistake? Any hints?
    Thank you,
    Bjoern Weitzig
    create table sggeoptcollection (pt SDO_GEOMETRY);
    CREATE INDEX myIdx ON sggeoptcollection (pt) INDEXTYPE IS MDSYS.SPATIAL_INDEX PARAMETERS('sdo_indx_dims=2, layer_gtype=point, sdo_rtr_pctfree=50');
    ALTER INDEX myIdx PARAMETERS ('index_status=deferred');
    -- big import with batched PreparedStatements goes here
    ALTER INDEX myIdx PARAMETERS ('index_status=synchronize sdo_batch_size=500');
    (1) http://download.oracle.com/docs/html/B10826_01/sdo_index_query.htm#g1010227


  • Bulk Loading with Availability Group

    We have a critical database that is bulk loaded every 30 minutes. The database is currently in the simple recovery model, for the obvious reason of keeping the transaction log manageable. I would like to add it to our SQL Server 2012 availability group so the database is always available; for that, it has to be set to the full recovery model. Is there a good way to keep the transaction log from getting unwieldy without doing backups as often as the loads take place? The database is a little over a GB and will only grow about 1-2% a month.
    Thor

    If the database is small, plan a daily full backup during non-load hours, and schedule log backups frequently in order to keep log usage under control. If the database is huge, plan a weekly full backup plus daily differentials, together with frequent log backups (you need to choose the timings for each day).
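    (A minimal sketch of the frequent-log-backup approach in standard T-SQL; the database name, paths, and cadence are made up, and in practice you would schedule these as SQL Server Agent jobs:)
    -- full backup once a day, during non-load hours:
    BACKUP DATABASE [BulkLoadedDb] TO DISK = N'D:\Backup\BulkLoadedDb_full.bak';
    -- a log backup every few minutes keeps the log truncatable under the FULL recovery model:
    BACKUP LOG [BulkLoadedDb] TO DISK = N'D:\Backup\BulkLoadedDb_log.trn';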
    Thanks, Rama Udaya.K (http://rama38udaya.wordpress.com)

  • Oracle performance problem

    We are facing some critical performance problems with one of the tables. I have a table with the schema given below. When there are around 2 lakh entries in this table, a select count(*) takes around 20 seconds to return the count, whereas any other table in the same database returns the count in less than 2 seconds for around 1 lakh entries. I am not able to figure out where the performance bottleneck is.
    Configuration:
    Oracle 8.0.5 on Windows NT 4.0
    USER_NAME NOT NULL VARCHAR2(30)
    TASK_ID NUMBER // Unique
    TASK_NAME NOT NULL VARCHAR2(100)
    TERMINAL_NAME VARCHAR2(30)
    TASK_DATE DATE
    TASK_STATUS NOT NULL NUMBER
    PARENT_ID NUMBER
    NE_NAME NOT NULL VARCHAR2(30)
    ENM_JOBID VARCHAR2(30)
    CMD_CONTENT VARCHAR2(4000)
    CMD_RESPONSE VARCHAR2(4000)
    RESPONSE_TIME DATE
    TASK_TYPE NUMBER
    SCHEDULE_TIME DATE
    All the tables are present in the same tablespace, so I guess it could not be because of any disk access differences between tables. Have you faced any such problems? Could this be because of the huge CMD_RESPONSE and CMD_CONTENT fields?

    Thanks a lot.
    I would like to know how I can optimise the performance when having longer records. Will increasing the block size help in getting better performance? If so, is there any other disadvantage in increasing the block size?
    My actual requirement is to read all the records in the table, so a static cursor is created on the server and I do a block fetch (SQLFetchScroll). I use ODBC for this purpose. The first fetch takes a lot of time, around 50 seconds or so for 2 lakh records. Any ideas how to optimise this?
    Best Regards,
    Vignesh
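    (One thing worth trying, as a minimal sketch: assuming the wide CMD_CONTENT/CMD_RESPONSE columns are the culprit, give COUNT(*) a narrow NOT NULL column to drive from, so the optimizer can scan a small index instead of the wide table rows. The table and index names here are made up; TASK_STATUS is declared NOT NULL in the schema above, and on 8.0.5 ANALYZE is the era-appropriate way to give the cost-based optimizer statistics.)
    CREATE INDEX task_status_ix ON task_table (task_status);
    ANALYZE TABLE task_table COMPUTE STATISTICS;
    -- with statistics in place, this count can be satisfied from the narrow index:
    SELECT COUNT(*) FROM task_table;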

  • Performance problem with Function Group

    Hi All,
    when we call a function module, all the function modules present in that function group are loaded into main memory.
    So how do we solve the performance problems that occur?
    Regards,
    Sravan.

    You should analyse your problem more carefully before asking:
    + You can have performance problems related to the execution of function modules.
    + You can also have memory-related performance problems from loading huge (non-standard) function groups.
    The fix for the second is smaller, well-defined function groups.
    Solutions for the first are constantly discussed here.
    Siegfried

  • Performance problem in loading the master data attributes 0Equipment_attr

    Hi Experts,
    We have a performance problem loading the master data attributes 0Equipment_attr. It runs as a pseudo-delta (full update), and the same InfoPackage runs with different selections. The problem we are facing is that the load runs 2 to 4 hours in the US morning, while in the US night it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are OK).
    When I checked the R/3-side job log (SM37), the job is running late there too. It shows the first and second IDocs coming in quickly, while the third and fourth IDocs arrive in BW after a 5-7 hour gap, are saved into the PSA, and then go on to the InfoObject.
    We have user exits for the DataSource and ABAP routines, but they run fine in little time and the code is not very complex either.
    Can you please explain and suggest steps on the R/3 and BW sides? How can I fix this performance issue?
    Thanks,
    dp

    Hi,
    check this link for data load performance. Under "Extraction Performance" you will find many useful hints.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    Regards
    Andreas

  • Numbers Import and Load Performance Problems

    Some initial results of converting a single 1.9MB Excel spreadsheet to Numbers:
    _Results using Numbers v1.0_
    Import 1.9MB Excel spreadsheet into Numbers: 7 minutes 3.5 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 11.7 seconds
    _Results using Numbers v1.0.1_
    Import 1.9MB Excel spreadsheet into Numbers: 6 minutes 36.1 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 5.8 seconds
    _Comparison to Excel_
    Excel loads the original 1.9MB spreadsheet in 4.2 seconds.
    Summary
    Numbers v1.0 and v1.0.1 exhibit severe performance problems with loading its own files and importing Excel v.X files.

    Hello
    It seems that you missed a detail.
    When a Numbers document is 1.9MB on disk, it may be a 7 or 8MB file to load.
    A Numbers document is not a file but a package, which is a disguised folder.
    The document itself is described in an extremely verbose XML file stored in a gzip archive.
    Opening such a document starts with an unpack sequence, which is a fast one (except maybe if the space available on the volume is short).
    The unpacked file may easily be 10 times larger than the packed one.
    Just an example: the xml.gz file containing the report of my bank operations for 2007 is a 300KB one, but the expanded one, the one which Numbers must read, is a 4MB one; yes, 13.3 times the original.
    And loading it is not sufficient: this huge file must be "interpreted" to build the display.
    As it is very long, Apple treats it as the TRUE description of the document, and so, each time it must display something, it must work like the interpreters that old users like me knew when they used the BASIC available on Apple // machines.
    Adding a supplementary stage would have added time to the opening sequence but would have sped up the use of the document.
    Of course, it would also have added a supplementary stage during the save process.
    I hope that they will adopt this scheme, but of course I don't know if they will do that.
    Of course, the problem is quite the same when we import a document from Excel or from AppleWorks.
    The app reads the original, which is stored in a compact shape, then deciphers it to create the XML code. Optimisation would perhaps reduce these tasks a bit, but it will remain a time-consuming process.
    Yvan KOENIG (from FRANCE, Sunday 27 January 2008 16:46:12)

  • Golden Gate Initial Load - Performance Problem

    Hello,
    I'm using the fastest method of initial load, Direct Bulk Load, with the additional parameters:
    BULKLOAD NOLOGGING PARALLEL SKIPALLINDEXES
    Unfortunately, the load of a big table of 734 billion rows (around 30 GB) takes about 7 hours. The same table loaded with a normal INSERT statement in parallel via DB link takes 1 hour 20 minutes.
    Why does it take so long using GoldenGate? Am I missing something?
    I've also noticed that the load time with and without the PARALLEL parameter for BULKLOAD is almost the same.
    Regards
    Pawel

    Hi Bobby,
    It's an Extract/Replicat pair using SQL*Loader,
    created with the following commands:
    ADD EXTRACT initial-load_Extract, SOURCEISTABLE
    ADD REPLICAT initial-load_Replicat, SPECIALRUN
    The Extract parameter file:
    USERIDALIAS {:GGEXTADM}
    RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
    RMTTASK replicat, GROUP {:REP_INIT_NAME}_0
    TABLE Schema.Table_name;
    The Replicat parameter file:
    REPLICAT {:REP_INIT_NAME}_0
    SETENV (ORACLE_SID='{:REPLICAT_SID}')
    USERIDALIAS {:GGREPADM}
    BULKLOAD NOLOGGING NOPARALLEL SKIPALLINDEXES
    ASSUMETARGETDEFS
    MAP Schema.Table_name, TARGET Schema.Table_tgt_name,
    COLMAP(USEDEFAULTS),
    KEYCOLS(PKEY),
    INSERTAPPEND;
    Regards,
    Pawel

  • How to improve performance for Azure Table Storage bulk loads

    Hello all,
    Would appreciate your help as we are facing a challenge.
    We are trying to bulk load Azure Table Storage. We have a file that contains nearly 2 million rows.
    We need to reach a point where we can bulk load 100,000-150,000 entries per minute. Currently, it takes more than 10 hours to process the file.
    We have tried Parallel.ForEach, but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
    Any ideas? I have spent nearly two days trying to optimize this using PLINQ, but I am still not sure what the best thing to do is.
    Kindly note that we shouldn't be using SQL/Azure SQL for this.
    I would really appreciate your help.
    Thanks

    I'd think you're just pooling the parallel connections to Azure if you do it on one system. You'd also have the bottleneck of round-trip time from you, through the internet, to Azure and back again.
    You could speed it up by moving the data file to the cloud and processing it with a cloud Worker Role. That way you'd be inside the datacenter (which is a much faster, more optimized network).
    Or, if that's not fast enough: if you can split the data so that multiple Worker Roles each process part of the file, you can use the VM scaling to put enough machines on it that it gets done quickly.
    Darin R.

  • Unable to perform bulk load in BODS 3.2

    Hi
    We have upgraded our development server from BODS 3.0 to BODS 3.2. There is a dataflow wherein the job uses the bulk-load option. The job gives warnings at that dataflow, and all the data is shown as warnings in the log. No data is loaded into the target table. We have recently migrated from SQL Server 2005 to SQL Server 2008. Can someone let me know why the bulk-load option is not working in BODS 3.2?
    Kind Regards,
    Mahesh

    Hi,
    I want to upgrade from SQL Server 2005 to SQL Server 2008 with BODS 4.0,
    and I want to know the recommendations for doing it:
    - How do we use SQL Server 2008 with BODS?
    - What is the performance on SQL Server 2008?
    - What are the things to evaluate?
    - Is it necessary to migrate with backup/restore mode?
    - What are the steps of the migration?
    - Can we merge the disabled in BODS?

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week, when we set up a test machine to run our integration tests (several hundred), all report tests (about 30 tests) failed with a timeout exception.
    The test machine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64-bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in a WPF Viewer. I made a simple WPF GUI with two buttons; the first button executes a very simple report that only has a text object with a few words in it, and the other button also executes a simple report with only one text object, with approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly; the second report executes instantly but displays after approx. 1 min 30 sec.
    And "execute" in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field containing the same text, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with the rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've run several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed, and none of them have this performance problem. It's not a critical issue for us, because our users will run this application on their local PCs with Windows 7 64-bit, but it's a bit problematic for our project not to be able to run all of our integration tests. I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
        lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
        ' Initialize the viewer
        Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
        viewer.Owner = Me
        viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
        viewer.Show()
    End Sub

    Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
        Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
        lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
        ' Initialize the viewer
        Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
        viewer.Owner = Me
        viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
        viewer.Show()
    End Sub
    The reports are two empty default reports, each with only one text object in the details section.
    // Thomas

    See if the KB
    [1448013  - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also, the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

  • How to avoid performance problems in PL/SQL?

    How to avoid performance problems in PL/SQL?
    As per my knowledge, below are some points for avoiding performance problems in PL/SQL.
    Are there other points for avoiding performance problems?
    1. Use FORALL instead of FOR, and use BULK COLLECT to avoid looping many times (see the sketch after this list).
    2. EXECUTE IMMEDIATE is faster than DBMS_SQL
    3. Use NOCOPY for OUT and IN OUT if the original value need not be retained. Overhead of keeping a copy of OUT is avoided.
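    (A minimal sketch of points 1-3 together; the emp_demo table and every name in it are made up for illustration:)
    -- assumed demo table: CREATE TABLE emp_demo (emp_id NUMBER NOT NULL, salary NUMBER);
    DECLARE
        TYPE t_id_tab IS TABLE OF emp_demo.emp_id%TYPE;
        l_ids t_id_tab;
        l_cnt NUMBER;
        -- 3. NOCOPY passes the OUT collection by reference instead of copying it.
        PROCEDURE collect_low_paid(p_ids OUT NOCOPY t_id_tab) IS
        BEGIN
            -- 1. BULK COLLECT: one context switch fetches all matching rows at once.
            SELECT emp_id BULK COLLECT INTO p_ids
            FROM emp_demo
            WHERE salary < 1000;
        END;
    BEGIN
        collect_low_paid(l_ids);
        IF l_ids.COUNT > 0 THEN
            -- 1. FORALL: one context switch applies all the updates at once.
            FORALL i IN 1 .. l_ids.COUNT
                UPDATE emp_demo
                SET salary = salary * 1.1
                WHERE emp_id = l_ids(i);
        END IF;
        -- 2. EXECUTE IMMEDIATE handles dynamic SQL with less overhead than DBMS_SQL.
        EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM emp_demo WHERE salary >= 1000' INTO l_cnt;
        COMMIT;
    END;
    /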

    Susil Kumar Nagarajan wrote:
    > 1. Group functions or procedures into a PACKAGE
    Putting related functions and procedures into packages is useful from a code organization standpoint. It has nothing whatsoever to do with performance.
    > 2. Good to use collections in place of cursors that do DML operations on a large set of records
    But using SQL is more efficient than using PL/SQL with bulk collects.
    > 4. Optimize SQL statements if they need it
    > -> Avoid using IN and NOT IN conditions, or those that cause full table scans in queries
    That is not true.
    > -> See that queries use indexes properly; sometimes the leading index column is missed out, which causes performance overhead
    Assuming "properly" implies using an index: it is entirely possible that a table scan is more efficient than using an index.
    > 5. Use Oracle HINTS if the query can't be further tuned and hints can considerably help you
    Hints should be used only as a last resort. It is almost certainly the case that if you can use a hint that forces a particular plan to improve performance, then there is some problem in the underlying statistics that should be fixed in order to resolve issues with many queries rather than just the one you're looking at.
    Justin

  • Performance problem counting occurrences

    Hi,
    I have an InfoCube with 5 characteristics (region, company, distribution center, route, customer) and 3 key figures; I have set one of these key figures to average (values different from 0), and I am loading data from 16 months and 70 weeks. In my query I have a calculated key figure which counts the occurrences of the lowest characteristic, in order to obtain it by granularity level; I therefore always count the lowest detail (customer). There are approx. 500K customers, so my web templates take more than 10 minutes to display the 12 months. I have looked into building aggregates, but the query does not use them anyway. Has anyone had this kind of performance problem with such a low volume of data (6 million records for 12 months)? Has anyone found a workaround to improve performance? I really hope someone has this experience and could help me out; the life of BW in the organization will depend on it.
    Please help me out!
    Thanks in advance!

    Hi,
    First of all, thanks for your advice; I have taken part of both suggestions into my solution. I am no longer considering using the average defined in the ratio; however, I am still considering it in the query, and it is answering, at least for now, taking up to 10 minutes. My exact requirement is to display the count of distinct customers grouped by the upper levels. I have populated my InfoCube with 1 in my key figure; however, it may be duplicated for a distribution center, company, or region, therefore I have to find the distinct customers. With SAP's "How to count occurrences" I managed that, but it is not performing at an acceptable level. I have performed tests without the division between CKF customer / CKF avg customer and found this is what is now slowing the query. I find the boolean evaluation might be more useful and less costly, if you could hint a little more at how to do it; I would reward it with points. Also, a change in the model could be costly on the front-end side because of dependencies with queries and web templates; I would rather have it solved in the BW workbench by partitioning, aggregation, or new InfoCubes. One solution I have already analyzed is disaggregating the characteristics by totals into different InfoCubes with the same key figure, and then selecting the appropriate one per query. I was wondering if an initial routine could do the count distinct and group by with the same ratio for different characteristics, so I do not have to rework the other configuration I already have.
