Strange load performance...

Hi, folks!
My SWF files don't seem to load the way I want after publishing. I usually build in a loading bar to show progress while the movie loads; it should be visible right away and stay up until the main movie starts playing. But the browser seems to wait: it doesn't show anything, as if it had to download the whole lot before it can show even the first frames, which makes my loading bar look pretty pointless. Why the hell does it do that? Is there anything in the settings I don't know about? Could the problem be somewhere else, and not within the Flash environment?
Thanks a lot for any help!

I'm dead sure nothing is being exported in the first frame; I've checked it carefully about fifty-odd times, especially after you and ross called my attention to it. If there were anything big there, it would show up in the bandwidth profiler when checking download performance. It does if I deliberately tick the "export in first frame" box as a test: the simulated download behaviour then looks very much like the one I'm experiencing under real fire, even though the settings are entirely different! I'm going mad over this, guys...
What about the entries in the HTML file:
<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=8,0,0,0" width="1000" height="590" align="middle">
<param name="allowScriptAccess" value="sameDomain" />
<param name="movie" value="pict/salvadore_images.swf" /><param name="quality" value="high" /><param name="bgcolor" value="#323642" /><param name="SCALE" value="exactfit" /><embed src="pict/salvadore_images.swf" width="1000" height="590" align="middle" quality="high" bgcolor="#323642" scale="exactfit" allowScriptAccess="sameDomain" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
</object>
Anything wrong, anything missing?
Thanks again for any help!

Similar Messages

  • Unable to load performance pack, using Java I/O on WL60, sp2

    Dear friends,
    I am seeking help from you. When we start WL60 SP2 on Sun Solaris 5.6, we get
    the following exception:
    <Jul 31, 2001 5:39:53 PM EDT> <Error> <Performance Pack> <Unable to load performance
    pack, using Java I/O.
    java.lang.UnsatisfiedLinkError: getFdLimit
    at weblogic.socket.PosixSocketMuxer.getFdLimit(Native Method)
    at weblogic.socket.PosixSocketMuxer.<init>(PosixSocketMuxer.java:104)
    at java.lang.Class.newInstance0(Native Method)
    at java.lang.Class.newInstance(Class.java:237)
    at weblogic.socket.SocketMuxer.makeTheMuxer(SocketMuxer.java:128)
    at weblogic.socket.SocketMuxer.getMuxer(SocketMuxer.java:83)
    at weblogic.t3.srvr.ListenThread.run(ListenThread.java:224)
    >
    However, the server itself starts, and our applications run OK
    (at least so far). But this exception appears every time on some user accounts.
    I was wondering what causes this exception. Some user accounts on the same machine
    don't have this problem.
    I am also wondering whether it will cause performance problems when traffic is high.
    We have already applied the patches.
    Any hints and suggestions are welcome.
    Thanks in advance.
    -Ju

    Dear Deyan,
    Thanks for your help. We do have $WEBLOGIC_HOME/lib/solaris in LD_LIBRARY_PATH,
    which is set when running ". setEnv.sh" before startWebLogic.sh.
    We failed on one patch: 105210-27, for some reason.
    The strange thing is: on the same machine, all WL60 instances running under user
    accounts (under /users/developers/) have no such error. But it happens under some
    accounts, like those under /export/home/, etc. /users/developers is mounted from
    another physical machine.
    -Ju
    "Deyan D. Bektchiev" <[email protected]> wrote:
    You should have the $WEBLOGIC_HOME/lib/solaris directory in your LD_LIBRARY_PATH
    so that the server can load the performance pack (which is a shared library called
    libmuxer.so).
    If it is present, then run ldd libmuxer.so and you will see whether any libraries
    that it depends on are missing.
    Also make sure you have all of the required patches for Solaris 2.6 installed.
    --dejan
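    A minimal diagnostic sketch from the JVM side (an illustration, assuming the performance pack library is libmuxer.so so that System.loadLibrary("muxer") resolves it; run it with the same LD_LIBRARY_PATH the server uses, e.g. after ". setEnv.sh"):

    // MuxerCheck.java - print the JVM's library path and try to load the
    // performance pack the same way the server would.
    public class MuxerCheck {
        public static void main(String[] args) {
            // On Unix the JVM derives java.library.path from LD_LIBRARY_PATH;
            // $WEBLOGIC_HOME/lib/solaris should appear in it.
            System.out.println("java.library.path = "
                    + System.getProperty("java.library.path"));
            try {
                System.loadLibrary("muxer"); // maps to libmuxer.so on Solaris
                System.out.println("Performance pack library loaded OK.");
            } catch (UnsatisfiedLinkError e) {
                // Same class of error as in the server log above; the message
                // often names the missing library or symbol (e.g. getFdLimit).
                System.out.println("Load failed: " + e.getMessage());
            }
        }
    }

    If the load fails here too, ldd (as Deyan suggests) will usually show which dependency is missing.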

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. Urgent!
    What are the data-loading performance issues we need to take care of? Please explain and let me know the T-codes. Urgent!
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize those tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the sketch after this list). When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance, since these transformations are not compiled in advance but are carried out at run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
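    Tip 11's point is language-independent. As a hypothetical illustration in Java/JDBC (table and column names invented for illustration; not SAP code), the difference is one SELECT per record versus one array-style read into an in-memory buffer:

    // BufferedLookup.java - contrast a per-record lookup with a buffered read.
    import java.sql.*;
    import java.util.*;

    public class BufferedLookup {
        // Slow pattern: one SELECT per key, i.e. a "single select" fired
        // from inside a loop - one database round-trip per record.
        static String lookupOneByOne(Connection con, String key) throws SQLException {
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT name FROM materials WHERE id = ?")) {
                ps.setString(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }

        // Faster pattern: read the working set once into a map (the "buffer")
        // and resolve every record against memory instead of the database.
        static Map<String, String> loadBuffer(Connection con) throws SQLException {
            Map<String, String> buffer = new HashMap<>();
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, name FROM materials")) {
                while (rs.next()) {
                    buffer.put(rs.getString(1), rs.getString(2));
                }
            }
            return buffer;
        }
    }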
    Hope it Helps
    Chetan
    @CP..

  • Error BEA-000438 - Unable to load performance pack. Using Java I/O instead.

    On a Solaris 9 machine, 64-bit architecture, with j2sdk1.4.2_08 and WebLogic Server 8.1 SP2,
    when I try to deploy the application, launching java with the "-d64" option, I get:
              <Jun 22, 2005 12:12:41 PM CEST> <Error> <Socket> <BEA-000438> <Unable to load performance pack. Using Java I/O instead.
              Please ensure that libmuxer library is in
              :'/export/home/j2se/j2sdk1.4.2_08/jre/lib/sparcv9/server:/export/home/j2se/j2sdk1.4.2_08/jre/lib/sparcv9:/export/home/Aplics/Apl1/WEB-INF/lib::/usr/local/bea/weblogic81/server/lib/solaris:/usr/local/bea/weblogic81/server/lib/solaris/oci920_8:/usr/lib'
    libmuxer exists in /usr/local/bea/weblogic81/server/lib/solaris.
    Any idea?
              Thanks

    Can you post more details?
    Sergi
    Jiffy <[email protected]> wrote:
    >error:
    ><2004-3-12 3:48:54 PM CST> <Error> <Socket>
    ><BEA-000438> <Unable to load performance pack. Using Java I/O instead.
    >Please ensure that wlntio.dll is in: 'D:D:/bea/weblogic81/server/bin'
    >

  • Unable to load performance pack using JRockit 26.3 on Solaris 10

    Hi all,
    I'm getting the following error when starting up my WebLogic 9.1 server instance on Solaris 10. I've recently upgraded from Sun's JVM to JRockit 26.3. I do not get this error when using Sun's JVM.
    <May 16, 2006 9:16:38 AM EDT> <Error> <Socket> <BEA-000438> <Unable to load performance pack. Using Java I/O instead. Please ensure that a native performance library is in: '/usr/local/bea91_dev1/jrockit-R26.3.0-jdk1.5.0_06/jre/lib/sparcv9/jrockit:/usr/local/bea91_dev1/jrockit-R26.3.0-jdk1.5.0_06/jre/lib/sparcv9:/usr/local/bea91_dev1/jrockit-R26.3.0-jdk1.5.0_06/jre/../lib/sparcv9:/usr/local/bea91_dev1/patch_weblogic910/profiles/default/native:/usr/local/bea91_dev1/weblogic91/server/native/solaris/sparc:/usr/local/bea91_dev1/weblogic91/server/native/solaris/sparc/oci920_8:/usr/lib'
    Here's the content of the /usr/local/bea91_dev1/weblogic91/server/native/solaris/sparc folder:
    libfastfile.so libstackdump.so libwlfileio2.so
    libjmsc.so libterminalio.so wlauth
    libmuxer.so libweblogicunix1.so wlkeytool
    libnodemanager.so libwlenv.so
    I also have a sparc64 directory with the same file names.
    Can anyone shed some light?
    Regards,
    Daniel

    JRockit on Solaris is 64-bit, so it needs the 64-bit performance pack. Put $WL_HOME/server/native/solaris/sparc64 in LD_LIBRARY_PATH instead of $WL_HOME/server/native/solaris/sparc.
    Bob
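    A quick way to confirm which flavor of native directory a given JVM needs is to print its architecture properties. A minimal sketch (note that sun.arch.data.model is a JVM-specific property and may be absent on some VMs):

    // ArchCheck.java - show whether this JVM is 32- or 64-bit, and hence
    // whether .../solaris/sparc or .../solaris/sparc64 applies.
    public class ArchCheck {
        public static void main(String[] args) {
            // "sparcv9" indicates a 64-bit JVM on Solaris/SPARC.
            System.out.println("os.arch = " + System.getProperty("os.arch"));
            // "64" or "32"; defaults to "unknown" if the property is absent.
            System.out.println("data model = "
                    + System.getProperty("sun.arch.data.model", "unknown"));
        }
    }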

  • What are the better load/performance testing tools available for Flex Application with BlazeDS RO?

    My application is designed with Flex 3, ActionScript 3, and BlazeDS remote objects.
    I tried OpenSTA, but I can't do dynamic parameterization in its generated scripts because the responses of the calls are binary values, and we can't get at the response using the SCL language.
    When testing with OpenSTA against HTTPService, I can do the dynamic parameterization and get the response.
    Can anyone give me information on the questions below?
    Can we do dynamic parameterization with OpenSTA for Flex remote objects?
    And what are the better load/performance tools available for Flex remote objects?

    Your approach is fine, depending on how many and what type of CFCs you are talking about. If they are "singletons" - that is, only one instance of each CFC is needed to be in memory and can be reused/shared from multiple parts of your application - caching them in the application scope is common.  Just make sure they are thread safe ("var" or local.* all your method variables).
    You might consider taking advantage of a dependency injection framework, such as DI/1 (part of the FW/1 MVC framework), ColdSpring, or WireBox (a module of the ColdBox platform that can be used independently).  They have mechanisms for handling and caching singletons.  Then you wouldn't have to go to the application scope to get your CFC instances.
    -Carl V.

  • FORMS CRASHES (FRM-92101) ON AS 10.1.2.0.2 DURING LOAD PERFORMANCE TESTING

    Hiya
    We have been doing load/performance testing using the testing tool QALoad on our Forms 10g application. After about 56 virtual users (sessions) have logged in to our application, if a new user tries to log in, Forms crashes. As soon as we encounter the FRM-92101 error, no more new Forms sessions are able to start.
    The load-testing software starts up each process very quickly, about every 10 seconds.
    The very first form that appears is the login form of our application, so we get the FRM-92101 error message before the login screen even appears.
    However, those users who have already logged in to our application are able to carry on with their tasks.
    We are using Application Server 10g 10.1.2.0.2. I have checked the status of the Application Server through the Oracle Enterprise Manager console. The OC4J instance is up and running. Also, the server's configuration is pretty good: it is running on 2 CPUs (AMD Opteron 3GHz) and has 32GB of memory. The memory used by those 56 sessions is less than 3GB.
    The Application Server is running on Microsoft Windows Server 2003 64-bit Enterprise Edition.
    Any help will be much appreciated.
    Cheers
    Mayur

    Hi Shekhawat
    In Windows Registry go to
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems
    In the right-hand panel you will find a string value named Windows. Double-click it, and in the pop-up window you will see a string similar to the following:
    %SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,20480,768 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
    Now if you read it carefully in the above string, you will find this parameter
    SharedSection=1024,20480,768
    Here SharedSection specifies the system and desktop heaps using the following format:
    SharedSection=xxxx,yyyy,zzzz
    The default values are 1024,3072,512
    All the values are in Kilobytes (KB)
    xxxx = System-wide Heapsize. There is no need to modify this value.
    yyyy = IO Desktop Heapsize. This is the heap for memory objects in the IO Desktop.
    zzzz = Non-IO Desktop Heapsize. This is the heap for memory objects in the Non-IO Desktop.
    On our server the values were as follows :
    1024,20480,768
    We changed the non-IO desktop heap size from 768 to 5112. With 5112 KB we managed to test our application with up to 495 virtual users.
    Cheers
    Mayur

  • Initial Load Performance Decrease

    Hi colleagues,
    We noticed a huge decrease in initial-load performance after installing an
    application on the PDA.
    In our first test we downloaded one data object of nearly 6.6MB, which
    corresponds to 30,000 records with eight fields each. The initial load
    to the PDA took only 2 minutes.
    We performed a second test with the same PDA after a reinstallation and
    a new device ID. The difference here is that we installed an MI
    application related to the same data object. The same amount of data was sent
    to the PDA. It took 3 hours to download.
    In a third test we changed the application so as not to have the
    related data object assigned to it. In this case, the download took 2
    minutes again.
    In other words, if we have an application with the data object
    assigned, it results in a huge decrease in initial-load performance.
    In both cases we used a direct connection to our LAN.
    Here are the PDA specs:
    - Windows Mobile 6 Classic
    - Processor: Marvell PXA310 at 624MHz
    - 64MB RAM, 256MB flash ROM (190MB available to the user)
    Any similar experiences?
    Thanks.
    Edited by: Renato Petrulis on Jun 1, 2010 4:15 PM

    I am confused about downloading a data object with no application.
    I thought you could only download data if it is associated with a mobile component; I guess you just assign the DMSCV manually?
    In any case, I have only experienced your second scenario, when we were downloading an application with a mobile component and no packaging of messages. We had maybe a few thousand records to download and process, and it would take an hour or more.
    When we enabled packaging, it would take 15-30 minutes.
    Then I switched to Create Setup Package, because it was simply easier to install the application and data together, with no corruption, no failures from the DMSCV not going operational and not sending data, etc. Plus, it was a faster download using either FTP or ActiveSync to transfer the install files.

  • To improve data load performance

    Hi,
    The data is getting loaded into the cube. There are no routines in the update rules or transfer rules; direct mapping is done to the InfoObjects.
    But there is an ABAP routine written for 0CALDAY in the InfoPackage. Other than the code below, there is no ABAP code written anywhere. For 7.7 million (77 lakh) records it is taking more than 10 hours to load. Any possible solutions for improving the data-load performance?
      DATA: L_IDX LIKE SY-TABIX.
      DATA: ZDATE LIKE SY-DATUM.
      DATA: ZDD(2) TYPE N.
      READ TABLE L_T_RANGE WITH KEY
           FIELDNAME = 'CALDAY'.
      L_IDX = SY-TABIX.
    *+1 month
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) = '12'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = '01'.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 1.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ENDIF.
    *+3 months
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) >= '10'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = ZDATE+4(2) + 3 - 12.
        ZDATE+6(2) = '01'.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 3.
        ZDATE+6(2) = '01'.
      ENDIF.
      CALL FUNCTION 'FIMA_END_OF_MONTH_DETERMINE'
        EXPORTING
          I_DATE                   = ZDATE
        IMPORTING
          E_DAYS_OF_MONTH          = ZDD.
      ZDATE+6(2) = ZDD.
      L_T_RANGE-HIGH = ZDATE.
      L_T_RANGE-SIGN = 'I'.
      L_T_RANGE-OPTION = 'BT'.
      MODIFY L_T_RANGE INDEX L_IDX.
      P_SUBRC = 0.
    Thanks,
    rani

    I don't think this filter routine is causing the issue...
    Please implement performance-improvement methods.
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap

  • Sql Loader Performance

    Hi
    I have some questions about SQL*Loader that I could not answer from Google or the documentation. I want to know whether there is any way to check if SQL*Loader is inserting records via the direct path or the conventional path. As we know, there are restrictions on direct loads: direct-path inserts do not support all the objects that conventional inserts do, so their functionality is restricted. If the database engine is not able to execute a direct-path insert, the operation is silently converted into a conventional insert. I have instructed SQL*Loader to insert using direct=true and parallel as well, but it takes 15 minutes to load 4 million records into the table, and I have observed that the transfer rate is a bit slow. I have Oracle 11gR2 on Windows 2008 with 40GB RAM and a SAN. How can I verify during execution whether SQL*Loader used the direct path or silently converted it to the conventional path? Here is my sample control file. Will the functions used in the control file convert the direct path to a conventional one?
    /c sqlldr userid='MSNV5Star/Aa123456@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=srv01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=orcl)))' control='C:\ControlFile.txt' log='C:\ Final Data.log' bad= 'C:\ Final Data.bad'  direct=true  PARALLEL=TRUE  skip=1 Errors=5000000
    LOAD DATA 
    INFILE 'C:\adeel loading\in\A06052010.txt'
    APPEND
    INTO TABLE GN_FILE_DATA_TABLE
    FIELDS TERMINATED BY "     " TRAILING NULLCOLS
    Operational_Date "to_date(:Operational_Date, 'YYYYMMDD')" ,
    Store_Code "TRIM(:Store_Code)" ,
    Txn_Void_Flag "TRIM(:Txn_Void_Flag)" ,
    Txn_Staff_Flag "TRIM(:Txn_Staff_Flag)" ,
    Txn_Aborted_Flag "TRIM(:Txn_Aborted_Flag)"
    Edited by: Oracle Studnet on May 30, 2011 7:18 AM

    Please do not post duplicates - sql Loader Performance
    Srini
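    For what it's worth, the SQL*Loader log file answers the original question directly: it contains a "Path used:" line reporting Direct or Conventional for the run. A minimal Java sketch that scans a log for that line (the default file name below is copied verbatim from the command line above):

    // PathCheck.java - scan a SQL*Loader log for the "Path used:" line.
    import java.io.IOException;
    import java.nio.file.*;

    public class PathCheck {
        public static void main(String[] args) throws IOException {
            // Log file as passed to sqlldr via log=...
            Path log = Paths.get(args.length > 0 ? args[0] : "C:\\ Final Data.log");
            // Print every line that reports the load path.
            Files.lines(log)
                 .filter(line -> line.contains("Path used:"))
                 .forEach(System.out::println);
        }
    }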

  • Data Load performance in BI7.0

    Hi,
    I have a generic question regarding BI 7.0.
    From the perspective of data-load performance, what features does BI 7.0 have compared to earlier versions?
    Thanks in advance,
    Rama Murthy

    Hi,
    In BI, the entry layer is the PSA, and it is mandatory to maintain the PSA to bring data into BI from any source. In BI it is possible to make the PSA typed or untyped.
    The InfoPackage functionality is reduced: it loads data up to the PSA only.
    The DTP uploads data between BI objects within BI. Transformations replace the update rules and transfer rules.
    DTPs and transformations remove the data-mart interface between the BI objects.
    If no transformation of the data is required, we can load data directly to the target without maintaining an InfoSource.
    All these properties are available because of the new concept and new object type DataSource, i.e., the BI DataSource (object type RSDS).
    Depending on the situation the InfoSource is sometimes optional and sometimes mandatory, but the PSA is always mandatory in BI; the rest is the same as in 3.x.
    Hope this helps in solving your problem.
    Regards
    Ramakrishna Kamurthy

  • SCD 2 load performance with 60 millions records

    Hey guys!
    I'm wondering what the load performance would be for a type 2 SCD mapping based on the framework presented in the transformation guide (pages A1-A20). The dimension has the following characteristics:
    60 million records
    50 columns (including 17 to be tracked for changes)
    Has anyone come across a similar case?
    Mark or Igor- Is there any benchmark available on SCD 2 for large dimensions?
    Any help would be greatly appreciated.
    Thanks,
    Rene

    Rene,
    It's really very difficult to guesstimate the loading time for a similar configuration. Too many parameters are missing, especially hardware. We are in the process of setting up some real benchmarks later this year - maybe you can give us some interesting scenarios.
    On the other hand, 50-60 million records is not that many these days... so I personally would consider anything more than several hours (on half-decent hardware) too long.
    Regards:
    Igor

  • Extractor Designing to improve the Load performance.

    Hi all,
    I am extracting data from the MM application. For this I am using the LO 2LIS_02_ITM extractor, and I have enhanced it with 32 fields, which is hampering my data-load performance.
    Could you please let me know how I can improve the data-load performance?
    Do I need to create different generic extractors instead of enhancing the LO extractor?
    The DSO also has many fields in it. Should I split it into two and create a MultiProvider for reporting?
    Regards
    KK

    Hello,
    My suggestion would be to create another generic DataSource for the logical set of fields required in BI.
    Then you can load them separately into different DSOs, and then into a single InfoCube, or into two InfoCubes with a MultiProvider for reporting on them.
    Further, you can check the links below:
    Extraction-Enhancement-Performance problem
    Increase dataload performance
    Dataload Performance
    Performance Enhancement for Custom Data Extractor
    Regards,
    Dhanya

  • In SAP BW, errors related to authorizations, loading, performance, locking

    Hi gurus,
    Can anyone send me the documentation on errors in SAP BW related to authorizations, loading, performance, locking, and retraction? Please send it ASAP.

    Hi Sudheer,
    Check this link, which gives you the document on authorizations:
    <removed link farm>
    Hope it helps you.
    Regards
    chandra sekhar
    Edited by: Siegfried Szameitat on Nov 3, 2008 11:01 AM
    posting link farms is against the rules.

  • Unable to load performance pack

    Hi all,
    Can anyone solve this problem.
    I am starting WebLogic Server and it gives me the message
    "Unable to load performance pack, using Java I/O."
    It starts the server, but it also gives me the above exception.
    Thanks.
    Zahid.

    Zahid,
    Can you please provide the server version, service pack level, and
    OS information?
    Thanks
    [email protected] wrote:
    I get the same error. Here's the error message I get:
    <Jan 28, 2002 10:57:59 AM EST> <Error> <Performance Pack> <Unable to
    load performance pack, using Java I/O.
    java.lang.UnsatisfiedLinkError: no wlntio in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1312)
    at java.lang.Runtime.loadLibrary0(Runtime.java:749)
    at java.lang.System.loadLibrary(System.java:820)
    at weblogic.socket.NTSocketMuxer.<init>(NTSocketMuxer.java:173)
    at java.lang.Class.newInstance0(Native Method)
    at java.lang.Class.newInstance(Class.java:237)
    at weblogic.socket.SocketMuxer.makeTheMuxer(SocketMuxer.java:126)
    at weblogic.socket.SocketMuxer.getMuxer(SocketMuxer.java:83)
    at weblogic.t3.srvr.ListenThread.run(ListenThread.java:232)
