BDLS performance on BI

Hello,
Are there any guidelines for optimizing performance of BDLS in BI? We recently copied our BW PRD to our BWQ and ran BDLS there. The conversion of one of our tables (a /BIC/ table with over 12 million records) took several days. Is there any way to speed this up for future copies?
Best regards, Wilbert

/BIC/ tables are usually reporting tables (PSA, cube, ODS). Do you really need to convert the data contained in the table you mention? You could exclude it from the BDLS job...
And how big is that table? If it's small, the bottleneck may be elsewhere... We have a few 2-3 terabyte tables that get converted quickly enough; anything above 10 TB we make sure doesn't get converted.
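If you want to verify whether a /BIC/ table actually needs conversion before excluding it, a quick row count on its logical system field can help. A minimal sketch, assuming a hypothetical table /BIC/AZSALES00 with a LOGSYS column and PRDCLNT100 as the old name (adjust table, field and value to your system):

    -- count the rows that still carry the old logical system name
    SELECT COUNT(*)
      FROM "/BIC/AZSALES00"
     WHERE LOGSYS = 'PRDCLNT100';
    -- a count of 0 means BDLS has nothing to convert here and the table can be excluded safely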

Similar Messages

  • BDLS performance - how to speed up BDLS run?

    Hi all,
    Although my database is not huge (1.5 TB), the BDLS run takes a lot of time: 26 hours.
    I found a very nice blog, /people/hari.peruri/blog/2006/11/01/execute-conversion-of-logical-system-names-bdls-in-short-time-and-in-parallel--intermediate, but since I am on Basis 6.40 it is not suitable for me.
    I tried it anyway, got an inconsistency, and had to implement note 962674.
    I guess my only workaround is to use the old program RBDLS2LS ..
    Any ideas how else I can speed up the BDLS run?
    Many thanks in advance,
    Best regards, Elena

    Hi Elena
    What can possibly be done in such a case is either a reorganisation of the tables that are accessed at the database while running transaction BDLS, or customizing the source code of program RBDLSMAP (called by the BDLS transaction) so that the number of accesses to some less important tables is reduced.
    For example, after a refresh, consider excluding some tables such as the infamous PA2* tables (if you use HR) and run the conversion in the background; it will go faster because the TEST phase is skipped. You might save about two hours just with that (a sketch for finding the biggest candidate tables follows below).
    Regards
    Chen
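    To decide which tables are worth excluding, it can help to list the logical-system-bearing tables together with their size. A rough sketch, assuming Oracle, that the database table names match the ABAP names (true for transparent tables), and that a DBA_SEGMENTS lookup is available; add an OWNER filter for your SAP schema if needed:

        -- candidate tables (fields referencing the LOGSYS domain) ordered by size
        SELECT DISTINCT d.TABNAME, ROUND(s.BYTES / 1024 / 1024) AS SIZE_MB
          FROM DD03L d
          JOIN DBA_SEGMENTS s
            ON s.SEGMENT_NAME = d.TABNAME
           AND s.SEGMENT_TYPE = 'TABLE'
         WHERE d.DOMNAME = 'LOGSYS'
           AND d.AS4LOCAL = 'A'
         ORDER BY SIZE_MB DESC;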

  • Table got re-created with 0 entries

    Hello All,
    In our ECC system, BDLS is taking too long after a refresh (24 to 36 hours). Because of that, and as per SDN blogs on improving BDLS performance, I built indexes on the few tables listed below using the SE11 and SE14 transactions. After completing BDLS I dropped those indexes (a sketch of the create/drop cycle follows the table list below). While dropping the index on BKPF, the table got re-created with 0 entries, whereas the other tables still have their entries. I followed the same process for all the tables. I then tried the same in a sandbox system, and there too BKPF got re-created with 0 entries while dropping the index. Could you please let me know why it has been re-created with 0 entries?
    BKPF   
    CE11000 
    COBK   
    COEP   
    COES  
    COFIS  
    GLPCA  
    GLPCP   
    GLPCT   
    MKPF   
    SRRELROLES
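    For reference, the create/drop cycle for such a helper index can also be checked at the database level, where dropping an index never removes table rows. A minimal sketch, assuming Oracle and assuming AWSYS as the logical-system-related field being indexed on BKPF (index name and field are illustrative only):

        -- before BDLS: temporary index including the logical system field
        CREATE INDEX "BKPF~Z1" ON BKPF (MANDT, AWSYS);

        -- sanity check before and after the drop; the row count must not change
        SELECT COUNT(*) FROM BKPF;

        -- after BDLS: drop only the index, never the table
        DROP INDEX "BKPF~Z1";
        SELECT COUNT(*) FROM BKPF;

    If the table itself ends up empty, the data was most likely deleted by the SE14 action used (for example an activate-and-adjust with data deletion) rather than by the index drop itself, which cannot remove rows.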

    Thanks for the prompt response.
    The service entry was created this month.
    I did try checking the "Deliv. Compl." indicator on the PO and then tried to reverse the service entry, but I get the same error, i.e. SE541.
    Thanks

  • Runtime error while executing transaction BDLS

    Hello,
    While executing BDLS transaction for converting logical system names I am getting following error
    Runtime Errors         CONVT_NO_NUMBER
    Except.                CX_SY_CONVERSION_NO_NUMBER
    Short text
         Unable to interpret "7,160 " as a number.
    How to resolve the above problem?

    Hi Rachel,
    BDLS generates an ABAP program SBDLS* that includes all tables which can contain logical system names, based on the domains behind the table fields. This program initially exists only at runtime and is checked by the syntax checker.
    Sometimes tables have inconsistencies associated with them, and you will get this dump when a syntax check is performed on them.
    So please check this, regenerate the affected tables, then redo the BDLS steps and check again. You can probably determine the affected tables yourself via a syntax check (only if the program has already been saved).
    An easy way to check this is to run program RBDLS2LS (the old BDLS) and afterwards perform a syntax check on the generated RBDLS<ClientName> program.
    Known tables which caused such dumps in the past are:
    DFKKCOLFILE_P_W
    DFKKKO         
    DFKKMKO        
    DFKKREP06      
    DFKKREP06_S    
    DFKKREP07      
    DFKKREV06      
    DFKKREV07 
    Simply reactivate the tables you identify in SE11.
    Another thing that has led to this dump in the past: in your R/3 system an extra character 'x' is appended to the value of your current release in the basis release table SVERS. The table RSBASIDOC takes the release value from SVERS. If you change the value back to your release in both tables (SVERS and RSBASIDOC) and activate your source system connection again, this can resolve the dump as well.
    I hope this helps. If not, please open a message with SAP (component BC-MID-ALE).
    b.rgds, Bernhard
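    To check whether this is the case, the two release values can be compared directly. A minimal sketch, assuming the field names VERSION in SVERS and SAPRL in RSBASIDOC (verify them in SE11 for your release):

        -- basis release as stored in the system
        SELECT VERSION FROM SVERS;

        -- release recorded for the BW source system connection(s)
        SELECT RLOGSYS, SLOGSYS, SAPRL FROM RSBASIDOC;

        -- a stray character appended to the release value (e.g. '640x')
        -- is what breaks the number conversion in the generated program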

  • BDLS in productive ERP system

    I have an interesting requirement. A customer is currently running R/3 4.7 with multiple countries. They want to split the system so one country has its own system. The approach has been to take the existing 3-system landscape and perform a homogeneous system copy to a new landscape. The catch is that they want to use the same corporate BI system for reporting, so BDLS must be run in the new landscape since these new systems will be connecting to the same BI system as the originals.
    We know SAP doesn't officially support running BDLS in a productive system like this, but we don't see any other options.  I've looked at the SAP Landscape Transformation documents, and while the Solution Map references "Split Systems" under the "Consolidate and Reduce IT Cost" Scenario, the guides don't include any documentation for this scenario.
    I've split systems before, for example when a company sells off a subsidiary, but in those instances we left the old logical system intact.
    Does anyone have any suggestions that would allow us to split the production system and have both resulting systems connect to the same BI system for reporting purposes?
    Thanks,
    Rich

    My suggestion would be to run BDLS to change the logical system name of the ECC client only on the copied system.
    Do not run the BDLS for changing the logical system name for BI client  in the copied ECC system.
    Before you do such a critical activity, make sure you simulate the steps in a temporary environment, i.e. build test systems for simulation. This would be quite a good approximation of what is to come.

  • How to deal with BDLS during 4.6C -- ECC 6.0 upgrade?

    Hi,
    I am currently performing an upgrade from 4.6C to ECC 6.0 and, before that, a data migration from x86 (Intel) servers to x64 (AMD) servers. So we have three fresh AMD servers (DEV, QAS, PRD). While the current landscape (x86) is live, I migrated the data from PRD (x86) to DEV (x64) and am performing the upgrade there. I used the same data to perform the upgrade on QAS (x64) also.
    Now, performing the data migration and upgrade on the PRD (x64) would involve more downtime. So, in order to avoid that, have decided to split the whole process over two consecutive weekends. The plan is as follows:
    Weekend 1:
    Bring down the PRD (x86), do a complete backup and restore the same on PRD (x64), then perform the data migration activities on PRD (x64). Once this is done, we want to go live with the PRD (x64) running R/3 4.6C on MS SQL 2005 and Windows 2003, for one week (5 days - Monday through Friday).
    And during these five days, I would perform the PREPARE phase.
    Weekend 2:
    I will start the upgrade on PRD (x64) on this weekend so that, we will get sufficient time to test and do post upgrade activities and decide to go live or not.
    I hope the above procedure would work. What do you guys say?
    Anyways, my question is, normally with the DEV and QAS, I perform the migration, PREPARE, upgrade and finally do the BDLS to convert the logical system to point to the new SID. But, with PRD, I cannot do so because, the system will go live with the new SID for one week. So, I may want to perform the BDLS on R/3 4.6C running on PRD (x64) and then perform the upgrade the following weekend. My question is, is it ok to do so?
    NOTE: The SID in the x86 and x64 are not the same. I have mentioned above the same SID just for ease of reading and typing.
    Thank you.

    Hi Markus,
    Sorry, it took a while to respond to your reply to my question. Yes, I understand that we don't have to convert the logical system. But, in our case, we use the production system database copy to upgrade the new development  box. So, we have to convert the logical system in the newly upgraded Development system from the one that currently belongs to the 32 bit production system to a new logical system that is unique to the landscape. We cannot even use the logical system name that belongs to the 32 bit Development System because, that box is currently being used until we go live with the 64 bit production box. So, we will have to create a new logical system. In this case, as we want to maintain consistency in the logical system naming format across the new 64 bit landscape, we are probably forced to convert the logical system name on the 64 bit production box also.
    So, now can you please advise, is it wise to convert to the new logical system after migrating the production server from 32 bit to 64 bit but, before the upgrade to ECC 6.0? As I had mentioned in my previous question, we will be live with the 4.6C on the 64 bit box for one week after migration to 64 bit but, before upgrade. In this case, I will have to connect to APO 64 bit and BI 7.0 64 bit (RFC) for one week before upgrade which, would need the logical system.
    Or do you feel, we still can use the old logical system for the one week and after completing the upgrade, we can do the logical system conversion on the PRD to maintain naming consistency across the landscape?
    Thank you.

  • How does BDLS work exactly?

    Hello,
    I ran the BDLS report on my system after the system refresh and by coincidence found the following:
    In table EDP13 there are two columns that contain logical system names, RCVPRN and RCVPOR. But only the RCVPRN column has been modified by the BDLS report. The other column still has the old values.
    Why is this happening? Is this OK?
    In help.sap.com documentation I've found following information:
    The report performs the following steps:
    1. Determines all the active, transparent database tables whose fields have references to the following domains: LOGSYS, EDI_PARTNUM
    2. Converts the field values to the new logical system names
    3. Updates the database
    I am not sure how to understand this.
    <b>Does this mean that table has to have a foreign key properly defined to be correctly converted? Cannot this lead to inconsistency? If so - is BDLS reliable?
    Can anyone please explain how exacly BDLS report works?</b>

    Hi John,
    Particularly for table EDP13 this is not a bug. The table stores partner profile outbound parameters, and RCVPOR is the receiver port, so I think it should not be changed.
    Otherwise, in general: yes, BDLS is not fully reliable. SAP clearly states: "Do not use it for production systems as there is no guarantee it will convert all tables."
    BDLS works by scanning all transparent tables and checking all fields (unless you explicitly restrict the table range). If a field's domain is LOGSYS or EDI_PARTNUM, it is checked for the old logical system name, which is then replaced with the new one (a conceptual sketch follows this reply).
    It does not work in the following cases:
    - if an application uses tables that do not reference these domains
    - if data is saved as part of fields in cluster or pool tables
    - if the new logical system name already exists in the system, which can cause errors for tables in which the logical system is a key field or part of a unique index (for example, COFIO1)
    Regards
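    To illustrate the mechanics described above: per table, BDLS first finds the fields whose domain is LOGSYS or EDI_PARTNUM and then replaces the old name with the new one. A conceptual SQL sketch only, assuming PRDCLNT100 as the old and QASCLNT200 as the new name (BDLS itself does this through generated ABAP, with a test mode and logging):

        -- step 1: which active table fields can hold a logical system name?
        SELECT TABNAME, FIELDNAME
          FROM DD03L
         WHERE DOMNAME IN ('LOGSYS', 'EDI_PARTNUM')
           AND AS4LOCAL = 'A';

        -- steps 2 and 3: per table and field, convert and update, e.g. for EDP13
        UPDATE EDP13
           SET RCVPRN = 'QASCLNT200'
         WHERE RCVPRN = 'PRDCLNT100';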

  • Security question about Admin transaction: BDLS

    Hello Experts,
    I have a question regarding transaction BDLS (Conversion of Logical System Names). My former supervisor gave me the task of removing the BDLS transaction from all user profiles because of its critical nature: it should not be performed on productive systems.
    As I checked SAP Notes and documentation, I found this line:
    "It is not possible to convert logical system names in a productive system."
    Based on that, the transaction cannot be performed in productive systems, not because of authorizations but because the SAP standard defines it that way.
    Do you have any experience with this? A few people have this transaction in our systems, but if it is not possible to perform it, then I don't need to remediate it.
    Thank you, for your feedback,
    Cheerz,
    david

    As I checked SAP Notes and documentation, I found this line:
    "It is not possible to convert logical system names in a productive system."
    Was any condition given along with that statement? If so, check whether it applies to your production system or not. However, I am in favour of removing the transaction anyway. I am not very familiar with this transaction, but there may be other ways (a program, a function module, etc.) through which the same thing could be done.
    Regards,
    Arpan Paik

  • Distribution of a BDLS run between work processes

    Hi all,
    We have R/3 4.6C with 4.2 TB and BW 7.0 with 7.4 TB. During the refresh from production to quality, the BDLS step takes nearly 24 to 40 hours. We have broken down the tables and are kicking off multiple BDLS jobs (for example Z* tables in one BDLS run and another set of tables in another run) so that these jobs run in parallel. During this conversion only one background process
    is running per BDLS run. Is there any way to distribute the load among the available background processes so that the conversion completes sooner?
    For example: in R/3 I trigger one BDLS run for the CATSDB table in the background and it occupies only one background process. How can we make use of all available background processes for CATSDB so that the conversion is completed sooner?
    Please suggest..
    Thanks,
    Subhash.G

    There is nothing wrong with running BDLS jobs in parallel. We faced a similar situation in a project a few years back, and I devised a solution after taking inputs from various sources; parallel processing was one of the steps for achieving a faster BDLS. Parallel processing is not an issue at all, but it needs to be done smartly. Bigger tables ought to be excluded, and dedicated processes can be used for them. For the rest you can bracket the tables into ranges like A* to D*, E* to F* and so on, making sure you exclude the larger tables. Temporary indexes on big tables where the logical system field is not always filled can also be helpful; the index needs to include the logical system field. There is no point in creating temporary indexes for tables where the logical system field is always filled, since a complete table scan would be performed anyway. Create indexes for the 6-7 largest tables of this kind (a sketch follows below). Of course, also increase the number of background processes temporarily.
    Regards.
    Ruchit.
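    A minimal sketch of such a temporary index, assuming Oracle and assuming LOGSYS as the logical system field of CATSDB (verify the actual field name in SE11; the index name is illustrative):

        -- helper index so BDLS can find rows with the old name without a full table scan
        CREATE INDEX "CATSDB~ZTMP" ON CATSDB (MANDT, LOGSYS);

        -- ... run the BDLS job(s) covering CATSDB, then remove the helper index again
        DROP INDEX "CATSDB~ZTMP";

    As noted above, this only pays off where many rows do not carry the old name; if the field is filled in nearly every row, the optimizer will choose a full table scan anyway.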

  • BDLS : Table T000 was not relevant for conversion

    Hello,
    After a system refresh (homogeneous system copy in ECC 6), we performed the post-copy procedure.
    I ran the BDLS transaction and all the tables were converted fine, except table T000.
    In transaction SCC4 I still have the former logical system defined.
    I checked the BDLS job log and found the message:
    "Table T000 was not relevant for conversion"
    I don't understand why. Is this the normal behaviour?
    Table T000 is not excluded in BDLSC ...
    Thank you in advance for your help.
    Best Regards.

    I just found out that I had launched BDLS from client 000.

  • BDLS - How to monitor the process?

    Dear All,
    I am running the BDLS sessions in the background (excluding the known huge tables) in my test system, just after the system refresh from production.
    The BDLS sessions have been running for more than 22 hours (still in progress). In which transaction can I see the progress of BDLS?
    I have checked SM50: the background process has been performing a sequential read on one table for more than 8 hours. Is it possible that it is hanging somewhere? How can I determine whether the BDLS run is hung?
    FYI, the table BDLS is currently working on is around 140 GB.
    Best Regards,
    Ken

    Dear Raja,
    the log I get from BDLSS shows the tables processed so far:
    CATSHR                    EXTSYSTEM                           0
    LOGSYS                              0
    CATS_BW_TIME              I_RLOGSYS                           0
    CATS_GUID_KEY*            EXTSYSTEM                           0
    CBPR                      LOGSYSTEM                           0
    CC1ERP                    SRCSYS                              0
    CCMCTIADMIN               LOGSYS                              0
    Dear Sekhar,
    do you mean the job log from SM37?:
    22.02.2010 09:59:19 Job started                                                                         00
    22.02.2010 09:59:19 Step 001 started (program RBDLSMAP, variant &0000000000000, user ID BASISADM7)      00
    22.02.2010 09:59:19 The new logical system name T00CLNT300 is assigned to the current client 300        B1
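    One rough way to tell whether the conversion is progressing rather than hanging is to count the remaining rows with the old logical system name at intervals. A sketch, using table CATSHR and its LOGSYS field from the log above, with PRDCLNT300 as a placeholder old name (substitute the table BDLS is actually working on):

        -- repeat every 30-60 minutes; a falling count means BDLS is still converting
        SELECT COUNT(*)
          FROM CATSHR
         WHERE LOGSYS = 'PRDCLNT300';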

  • Quote Report - Performance

    I have created a quote report with narratives around the opportunity, opportunity-product, and account information. The report contains a pivot table.
    I'm getting very hit-or-miss performance results on the report. Sometimes the report runs faster for all opportunities than when it's prompted with an opportunity ID from a web link.
    Any thoughts? There doesn't seem to be one column that makes a difference.
    Also this is not an analytical report as the quotes need to be generated same day.

    Hi Shaik,
    Please remove all the join select queries and use the 'for all entries' variant of the select query. Check whether you can create and use indexes in your queries.
    Thanks and Regards,
    Saurabh Chhatre

  • IS NOT NULL and performance

    I have a performance issue with a query - it would be something like -
    select col1,col2, sum(col3), get_val(col_4)
    from table1
    where
    get_val(col_4) is not null
    group by col1,col2, get_val(col_4)
    I have simplified this, but it is something similar. This works great; performance is great. Now I commented out the where clause as I needed to include the null values, and now the query does not return the result set; it keeps running forever. With the where clause it comes back in 60 seconds. There is only one row out of 560 rows that has a null value for col_4, which I need to display.
    Any help is appreciated.

    The only difference I notice between the two sqls is HASH(UNIQUE) -
    with IS NOT NULL in where clause -
    SELECT STATEMENT     ALL_ROWS     1598     1     209
    HASH(UNIQUE)          1598     1     209
    HASH(GROUP BY)          1598     1     209
    When is not null is removed from the where clause -
    SELECT STATEMENT     ALL_ROWS     1598     1     206
    HASH(GROUP BY)          1598     1     206          
    I'm guessing that the index is being used in the first scenario and not in the second. Any idea/suggestion as to how to overcome this?
    Thanks
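    If index usage really is the difference, one thing worth testing (a sketch, assuming Oracle and that get_val can be declared DETERMINISTIC) is a function-based index on get_val(col_4). Oracle does not store entirely NULL keys in a B-tree index, which is why the IS NOT NULL version can use such an index while the unrestricted query cannot:

        -- function-based index; get_val must be a DETERMINISTIC function
        CREATE INDEX idx_table1_getval ON table1 (get_val(col_4));

        -- refresh optimizer statistics so the new index is considered
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLE1', cascade => TRUE);
        END;
        /

    The one row where get_val(col_4) is NULL would still need a plan that does not rely on that index (for example a separate query combined via UNION ALL), since the index cannot return it.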

  • Issues with WebForm performance

    Hi,
    We migrated a new application from UAT to PROD and we encounter web form performance issues with the following behaviour:
    After login, the first form you open is always slow. After this you can switch from one form to another without a delay.
    Then, once you change any data, the next time you access a form it is delayed again.
    The point to note is that there are no performance issues in the UAT environment, and the configuration settings and Essbase cache settings are all the same. We did try moving the EPM system to a new server to rule out a hardware issue, but no luck; the performance issues still exist. CPU/memory consumption has been checked, and it has been established that the form-opening delay has nothing to do with lack of memory.
    The JVM heap is set to 4 GB, which is in fact higher than in the UAT environment (2 GB).
    The system runs on Windows Server 2008, EPM 11.1.2.1.600, with Planning, Reporting and Foundation on one server (32 GB RAM) and Essbase on another server (32 GB RAM). Any help is appreciated.
    Please find the Planning logs below; there is nothing mentioned in the Hyperion Planning error logs.
    <Nov 5, 2013 8:05:43 PM CET> <Info> <Security> <BEA-090905> <Disabling CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true>
    <Nov 5, 2013 8:05:43 PM CET> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true>
    <Nov 5, 2013 8:05:44 PM CET> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Oracle JRockit(R) Version R28.0.2-11-135406-1.6.0_20-20100624-2119-windows-x86_64 from Oracle Corporation>
    <Nov 5, 2013 8:05:46 PM CET> <Info> <Management> <BEA-141107> <Version: WebLogic Server 10.3.4.0  Fri Dec 17 20:47:33 PST 2010 1384255 >
    <Nov 5, 2013 8:05:48 PM CET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
    <Nov 5, 2013 8:05:48 PM CET> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool>
    <Nov 5, 2013 8:05:48 PM CET> <Notice> <Log Management> <BEA-170019> <The server log file C:\Oracle\Middleware\user_projects\domains\EPMSystem\servers\Planning0\logs\Planning0.log is opened. All server side log events will be written to this file.>
    <Nov 5, 2013 8:06:22 PM CET> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
    <Nov 5, 2013 8:06:28 PM CET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY>
    <Nov 5, 2013 8:06:28 PM CET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
    Calling getDomainConfiguration()
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
    Calling getRuntimeService()
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
      return com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean
      return com.bea:Name=EPMSystem,Type=Domain
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
    Domain location is 'C:\Oracle\Middleware\user_projects\domains\EPMSystem'
    Calling getRuntimeService()
      return com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
    Calling getConnection()
      return weblogic.management.jmx.mbeanserver.WLSMBeanServer@327b36fb
    Checking C:\Oracle\Middleware\user_projects\domains\EPMSystem\servers\Planning0\registry_update.xml file
    EPM_ORACLE_HOME: C:\Oracle\Middleware\EPMSystem11R1
    Template for PLANNING#11.1.2.0: C:\Oracle\Middleware\EPMSystem11R1\common\templates\applications\epm_planning_11.1.2.1.jar
    Dependencies for C:\Oracle\Middleware\EPMSystem11R1\common\templates\applications\epm_planning_11.1.2.1.jar: []
    BPMUI shared webapp not referenced from PLANNING#11.1.2.0
    Application name: PLANNING#11.1.2.0
    Application source: HyperionPlanning.ear
    Server name: Planning0
    Server port: 8300
    Server SSL port: 8343
    Application context: HyperionPlanning
    Registry product type: PLANNING_PRODUCT
    Registry physical web application type: PLANNING_WEBAPP
    weblogic.Name property is 'Planning0', seems to be WebLogic mode
    registry.isRegistryDatabaseCreated()true
    Registry was initialized sucessfully
    Executing pre custom update for PLANNING#11.1.2.0
    EPM_ORACLE_INSTANCE: C:\Oracle\Middleware\user_projects\epmsystem1
    Physical Web App found
    Web app already linked to some application server: false
    The registry was not modifyed because it already containse all sturctures
    Web app is already linked to the logical web app
    No needs to run custom updater for PLANNING#11.1.2.0
    loggingUpdatePLANNING.block file exist or the system is running in the Fusion mode, skipping logging.xml configuration
    Planning locale: en_US
    <Nov 5, 2013 8:06:39 PM CET> <Notice> <Log Management> <BEA-170027> <The Server has established connection with the Domain level Diagnostic Service successfully.>
    <Nov 5, 2013 8:06:39 PM CET> <Notice> <Cluster> <BEA-000197> <Listening for announcements from cluster using unicast cluster messaging>
    <Nov 5, 2013 8:06:39 PM CET> <Notice> <Cluster> <BEA-000133> <Waiting to synchronize with other running members of Planning.>
    <Nov 5, 2013 8:07:09 PM CET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN>
    <Nov 5, 2013 8:07:09 PM CET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING>
    <Nov 5, 2013 8:07:09 PM CET> <Notice> <Cluster> <BEA-000162> <Starting "async" replication service with remote cluster address "null">
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias DemoIdentity from the jks keystore file C:\Oracle\MIDDLE~1\WLSERV~1.3\server\lib\DemoIdentity.jks.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file C:\Oracle\MIDDLE~1\WLSERV~1.3\server\lib\DemoTrust.jks.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file C:\Oracle\Middleware\jrockit_160_20\jre\lib\security\cacerts.>
    <Nov 5, 2013 8:07:10 PM CET> <Alert> <Security> <BEA-090152> <Demo trusted CA certificate is being used in production mode: [
      Version: V3
      Subject: CN=CACERT, OU=FOR TESTING ONLY, O=MyOrganization, L=MyTown, ST=MyState, C=US
      Signature Algorithm: MD5withRSA, OID = 1.2.840.113549.1.1.4
      Key:  Sun RSA public key, 512 bits
      modulus: 9550192877869244258838480703390456015046425375252278279190673063544122510925482179963329236052146047356415957587628011282484772458983977898996276815440753
      public exponent: 65537
      Validity: [From: Thu Mar 21 21:12:27 CET 2002,
                   To: Tue Mar 22 21:12:27 CET 2022]
      Issuer: CN=CACERT, OU=FOR TESTING ONLY, O=MyOrganization, L=MyTown, ST=MyState, C=US
      SerialNumber: [    33f10648 fcde0deb 4199921f d64537f4]
    Certificate Extensions: 1
    [1]: ObjectId: 2.5.29.15 Criticality=true
    KeyUsage [
      Key_CertSign
      Algorithm: [MD5withRSA]
      Signature:
    0000: 9D 26 4C 29 C8 91 C3 A7   06 C3 24 6F AE B4 F8 82  .&L)......$o....
    0010: 80 4D AA CB 7C 79 46 84   81 C4 66 95 F4 1E D8 C4  .M...yF...f.....
    0020: E9 B7 D9 7C E2 23 33 A4   B7 21 E0 AA 54 2B 4A FF  .....#3..!..T+J.
    0030: CB 21 20 88 81 21 DB AC   90 54 D8 7D 79 63 23 3C  .! ..!...T..yc#<
    ] The system is vulnerable to security attacks, since it trusts certificates signed by the demo trusted CA.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=thawte Primary Root CA - G3,OU=(c) 2008 thawte\, Inc. - For authorized use only,OU=Certification Services Division,O=thawte\, Inc.,C=US". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=T-TeleSec GlobalRoot Class 3,OU=T-Systems Trust Center,O=T-Systems Enterprise Services GmbH,C=DE". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=T-TeleSec GlobalRoot Class 2,OU=T-Systems Trust Center,O=T-Systems Enterprise Services GmbH,C=DE". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=GlobalSign,O=GlobalSign,OU=GlobalSign Root CA - R3". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "OU=Security Communication RootCA2,O=SECOM Trust Systems CO.\,LTD.,C=JP". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=VeriSign Universal Root Certification Authority,OU=(c) 2008 VeriSign\, Inc. - For authorized use only,OU=VeriSign Trust Network,O=VeriSign\, Inc.,C=US". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=KEYNECTIS ROOT CA,OU=ROOT,O=KEYNECTIS,C=FR". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=GeoTrust Primary Certification Authority - G3,OU=(c) 2008 GeoTrust Inc. - For authorized use only,O=GeoTrust Inc.,C=US". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure[3]" is now listening on 127.0.0.1:8343 for protocols iiops, t3s, CLUSTER-BROADCAST-SECURE, ldaps, https.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure[1]" is now listening on fe80:0:0:0:0:5efe:a53:4816:8343 for protocols iiops, t3s, CLUSTER-BROADCAST-SECURE, ldaps, https.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure[4]" is now listening on 0:0:0:0:0:0:0:1:8343 for protocols iiops, t3s, CLUSTER-BROADCAST-SECURE, ldaps, https.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 10.83.72.22:8300 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "Default[3]" is now listening on 127.0.0.1:8300 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "Default[4]" is now listening on 0:0:0:0:0:0:0:1:8300 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "Default[2]" is now listening on fe80:0:0:0:0:ffff:ffff:fffe:8300 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "Default[1]" is now listening on fe80:0:0:0:0:5efe:a53:4816:8300 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure[2]" is now listening on fe80:0:0:0:0:ffff:ffff:fffe:8343 for protocols iiops, t3s, CLUSTER-BROADCAST-SECURE, ldaps, https.>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure" is now listening on 10.83.72.22:8343 for protocols iiops, t3s, CLUSTER-BROADCAST-SECURE, ldaps, https.>
    <Nov 5, 2013 8:07:10 PM CET> <Warning> <Server> <BEA-002611> <Hostname "WIPLPRD01.svc.unicc.org", maps to multiple IP addresses: 10.83.72.22, 0:0:0:0:0:0:0:1>
    <Nov 5, 2013 8:07:10 PM CET> <Notice> <WebLogicServer> <BEA-000330> <Started WebLogic Managed Server "Planning0" for domain "EPMSystem" running in Production Mode>
    <Nov 5, 2013 8:07:12 PM CET> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING>
    <Nov 5, 2013 8:07:12 PM CET> <Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode>
    using java.library.path: C:\Oracle\Middleware\EPMSystem11R1/products/Planning/lib64;C:\Oracle\Middleware\EPMSystem11R1/bin;C:\Oracle\Middleware\EPMSystem11R1/common/EssbaseRTC-64/11.1.2.0/bin;C:\Oracle\MIDDLE~1\patch_wls1034\profiles\default\native;C:\Oracle\MIDDLE~1\WLSERV~1.3\server\native\win\x64;C:\Oracle\MIDDLE~1\WLSERV~1.3\server\bin;C:\Oracle\MIDDLE~1\modules\ORGAPA~1.1\bin;C:\Oracle\MIDDLE~1\JROCKI~1\jre\bin;C:\Oracle\MIDDLE~1\JROCKI~1\bin;C:\Oracle\MIDDLE~1\WLSERV~1.3\server\native\win\x64\oci920_8
    EPM_ORACLE_HOME (C:\Oracle\Middleware\EPMSystem11R1) is set from JVM property "EPM_ORACLE_HOME".
    using Java property for Hyperion Home C:\Oracle\Middleware\EPMSystem11R1
    EPM_ORACLE_INSTANCE (C:\Oracle\Middleware\user_projects\epmsystem1) is set from JVM property[EPM_ORACLE_INSTANCE].
    Reaquired task list lease: Tue Nov 05 20:11:49 CET 2013: 1383678709156
    Seeking ESAPI.properties
      Found in 'org.owasp.esapi.resources' directory: C:\Oracle\Middleware\EPMSystem11R1\products\Planning\config\esapi\ESAPI.properties
    Loaded 'ESAPI.properties' properties file
    Seeking validation.properties
      Found in 'org.owasp.esapi.resources' directory: C:\Oracle\Middleware\EPMSystem11R1\products\Planning\config\esapi\validation.properties
    Loaded 'validation.properties' properties file
    Seeking antisamy-esapi.xml
      Found in 'org.owasp.esapi.resources' directory: C:\Oracle\Middleware\EPMSystem11R1\products\Planning\config\esapi\antisamy-esapi.xml
    EnterData_Inner Processing Time:424
    2013-11-05 20:14:47,454 INFO Thread-51 calcmgr.launch - Date/Time Started: 2013/11/05:20:14:47.452 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: WPA_Count By Planning user: wipoadmin Values entered for run-time prompts: [Variable] Wrk_Scenario:"Work_Plan_2014"[Variable] Funds:"Regular"[Variable] Units:"0001"
    - Date/Time Started: 2013/11/05:20:14:47.452 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: WPA_Count By Planning user: wipoadmin Values entered for run-time prompts: [Variable] Wrk_Scenario:"Work_Plan_2014"[Variable] Funds:"Regular"[Variable] Units:"0001"
    2013-11-05 20:14:54,066 INFO Thread-51 calcmgr.launch - Date/Time Ended: 2013/11/05:20:14:54.066 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: WPA_Count By Planning user: wipoadmin.
    - Date/Time Ended: 2013/11/05:20:14:54.066 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: WPA_Count By Planning user: wipoadmin.
    EnterData_Inner Processing Time:64
    EnterData_Inner Processing Time:15
    EnterData_Inner Processing Time:359
    EnterData_Inner Processing Time:2
    EnterData_Inner Processing Time:53
    EnterData_Inner Processing Time:4
    EnterData_Inner Processing Time:7
    EPM_ORACLE_INSTANCE (C:\Oracle\Middleware\user_projects\epmsystem1) is set from JVM property[EPM_ORACLE_INSTANCE].
    EPM_ORACLE_INSTANCE (C:\Oracle\Middleware\user_projects\epmsystem1) is set from JVM property[EPM_ORACLE_INSTANCE].
    EPM_ORACLE_INSTANCE (C:\Oracle\Middleware\user_projects\epmsystem1) is set from JVM property[EPM_ORACLE_INSTANCE].
    Setting HBR Mode to: 2
    In lookupBRLWA()
    Found HBR product = ESSBASE_PRODUCT
    Found HBR product = ESSBASE_PRODUCT
    Found HBR product = ESSBASE_PRODUCT
    HBR LWA Component = Default
    Default HBR = http://WIPLPRD01.svc.unicc.org:19000/eas
    In getDBDetails()
    Found HBR product = ESSBASE_PRODUCT
    In lookupBRLWA()
    Found HBR product = ESSBASE_PRODUCT
    Found HBR product = ESSBASE_PRODUCT
    =2013-11-05 20:35:08,234 WARN [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' com.hyperion.hbr.security.HbrSecurityAPI - Error retrieving user by identity
    - Error retrieving user by identity
    Embedded HBR initialized.
    EnterData_Inner Processing Time:6
    EnterData_Inner Processing Time:867
    [Tue Nov 05 20:35:33 CET 2013] Planning successfully notified HBR repository.
    EnterData_Inner Processing Time:7
    2013-11-05 20:40:38,613 INFO Thread-67 calcmgr.launch - Date/Time Started: 2013/11/05:20:40:38.606 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: NonPersonnel_Calc By Planning user: wipoadmin Values entered for run-time prompts: [Variable] Funds:"Regular"[Variable] Units:"0001"[Variable] Wrk_Scenario:"Work_Plan_2014"
    - Date/Time Started: 2013/11/05:20:40:38.606 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: NonPersonnel_Calc By Planning user: wipoadmin Values entered for run-time prompts: [Variable] Funds:"Regular"[Variable] Units:"0001"[Variable] Wrk_Scenario:"Work_Plan_2014"
    2013-11-05 20:40:47,241 INFO Thread-67 calcmgr.launch - Date/Time Ended: 2013/11/05:20:40:47.241 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: NonPersonnel_Calc By Planning user: wipoadmin.
    - Date/Time Ended: 2013/11/05:20:40:47.241 CET Server/Application/Database: localhost/1415_WP/AWP Business Rule Name: NonPersonnel_Calc By Planning user: wipoadmin.
    EnterData_Inner Processing Time:44
    EnterData_Inner Processing Time:2
    EnterData_Inner Processing Time:525
    EnterData_Inner Processing Time:1
    Reaquired task list lease: Tue Nov 05 20:41:49 CET 2013: 1383680509246

    I already replied there but it seems no moderator is willing to approve my reply so it never shows up.
    So I'll try replying in here instead:
    I never thought it could be an emulator issue, I thought I had done something wrong to begin with. So I take having performance issues with tiled layer is not a common problem I suppose?
    I'm using MOTODEV SDK platform, A1200 model (motorola).

  • Apple DVI to Video Adapter performance issues in Lion

    I have been an active Apple DVI to Video Adapter user for a couple of years, and everything worked perfectly well until I upgraded to Lion.
    Everything on my TV screen is lagging now, even the mouse pointer, and there is no chance of watching any movie like that.
    Does anyone have the same issue? Maybe someone could give me advice on what I could do to improve graphics performance on the TV screen using the Apple DVI to Video Adapter? I'm using a late 2007 MBP with an 8600M GT video adapter.

    I am having the exact same problem. I have a late 2007 MBP that I use every day where I teach. I use the adapter to project onto a large white screen using S-Video. Since the Lion update, it is very slow and has even completely crashed my computer a few times. I realize the MBP is getting a bit dated, but the difference between Snow Leopard and Lion is remarkable.
