Relationship between ERPi metadata rules and Data load rules

I am using version 11.1.1.3 to load EBS data to ERPi and then to FDM. I have created a Metadata rule to which I have assigned a ledger from the Source Accounting Entities. I have also created a Data Load rule that references the same ledger as the Metadata rule. I have about 50 ledgers to integrate, so I have added the source adapters that reference the Data Load rule.
My question is: what is the relationship between the Metadata rule, the Data Load rule, and the ledger? If you can specify only one ledger per Metadata rule, then how does FDM know to use another Metadata rule with another ledger attached to it? Thanks!


Similar Messages

  • How can I add dimensions and load data into Planning applications?

    Please let me know how I can add dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM, or HAL to load metadata and data into Planning applications.
    The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through one of the above-mentioned tools; there are also other ways to achieve the same result.
    - Krish
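
    To make the rules-file route concrete, here is a minimal sketch that runs a MaxL data import through the essmsh shell from Java so it can be scheduled. The application, database, file, and rules-file names are placeholders, and the exact MaxL grammar can vary by Essbase version, so treat this as an illustration rather than a drop-in script.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        /** Sketch: load data into the Essbase cube behind a Planning app
         *  using a rules file, via a generated MaxL script run with essmsh.
         *  ASSUMPTIONS: essmsh is on the PATH; credentials, app/db names,
         *  data file, and rules file are placeholders. */
        public class PlanningDataLoad {
            public static void main(String[] args) throws IOException, InterruptedException {
                String maxl = String.join("\n",
                    "login admin password on essbaseserver;",
                    "import database 'PlanApp'.'Plan1' data",
                    "  from local text data_file '/data/actuals.txt'",
                    "  using server rules_file 'ldall'",
                    "  on error write to '/data/actuals.err';",
                    "logout;",
                    "exit;");
                Path script = Files.createTempFile("dataload", ".msh");
                Files.writeString(script, maxl);              // requires Java 11+
                int rc = new ProcessBuilder("essmsh", script.toString())
                        .inheritIO()                          // show MaxL output in the console/log
                        .start().waitFor();
                System.exit(rc);                              // pass essmsh's exit code to the scheduler
            }
        }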

  • Getting error "The trust relationship between the primary domain and the trusted domain failed" in SharePoint 2010

    Hi,
    A SharePoint 2010 backup was taken from production and restored through the Semantic tool on one of the servers. The web application from which the backup was taken is working fine.
    But SharePoint itself is not working correctly: we cannot create any new web application, navigating to the ServiceApplications.aspx page shows an error, and even the Search and User Profile services of the existing web application are not working.
    Checking the SharePoint logs, I found the exception below:
    11/30/2011 12:14:53.78  WebAnalyticsService.exe (0x06D4)         0x2D24 SharePoint Foundation          Database                     
     8u1d High     Flushing connection pool 'Data Source=urasvr139;Initial Catalog=SharePoint_Config;Integrated Security=True;Enlist=False;Connect Timeout=15' 
    11/30/2011 12:14:53.78  WebAnalyticsService.exe (0x06D4)         0x2D24 SharePoint Foundation          Topology                     
     2myf Medium   Enabling the configuration filesystem and memory caches. 
    11/30/2011 12:14:53.79  WebAnalyticsService.exe (0x06D4)         0x12AC SharePoint Foundation          Database                     
     8u1d High     Flushing connection pool 'Data Source=urasvr139;Initial Catalog=SharePoint_Config;Integrated Security=True;Enlist=False;Connect Timeout=15' 
    11/30/2011 12:14:53.79  WebAnalyticsService.exe (0x06D4)         0x12AC SharePoint Foundation          Topology                     
     2myf Medium   Enabling the configuration filesystem and memory caches. 
    11/30/2011 12:14:55.54  mssearch.exe (0x0864)                    0x2B24 SharePoint Server Search       Propagation Manager          
     fo2s Medium   [3b3-c-0 An] aborting all propagation tasks and propagation-owned transactions after waiting 300 seconds (0 indexes)  [indexpropagator.cxx:1607]  d:\office\source\search\native\ytrip\tripoli\propagation\indexpropagator.cxx 
    11/30/2011 12:14:55.99  OWSTIMER.EXE (0x1DF4)                    0x1994 SharePoint Foundation          Topology                     
     75dz High     The SPPersistedObject with
    Name User Profile Service Application, Id 9577a6aa-33ec-498e-b198-56651b53bf27, Parent 13e1ef7d-40c2-4bcb-906c-a080866ca9bd failed to initialize with the following error: System.SystemException: The trust relationship between the primary domain and the trusted
    domain failed.       at System.Security.Principal.SecurityIdentifier.TranslateToNTAccounts(IdentityReferenceCollection sourceSids, Boolean& someFailed)     at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection
    sourceSids, Type targetType, Boolean forceSuccess)     at System.Security.Principal.SecurityIdentifier.Translate(Type targetType)     at Microsoft.SharePoint.Administration.SPAce`1.get_PrincipalName()    
    at Microsoft.SharePoint.Administration.SPAcl`1.Add(String princip... 
    11/30/2011 12:14:55.99* OWSTIMER.EXE (0x1DF4)                    0x1994 SharePoint Foundation          Topology                     
     75dz High     ...alName, String displayName, Byte[] securityIdentifier, T grantRightsMask, T denyRightsMask)     at Microsoft.SharePoint.Administration.SPAcl`1..ctor(String persistedAcl)    
    at Microsoft.SharePoint.Administration.SPServiceApplication.OnDeserialization()     at Microsoft.SharePoint.Administration.SPIisWebServiceApplication.OnDeserialization()     at Microsoft.SharePoint.Administration.SPPersistedObject.Initialize(ISPPersistedStoreProvider
    persistedStoreProvider, Guid id, Guid parentId, String name, SPObjectStatus status, Int64 version, XmlDocument state) 
    11/30/2011 12:14:56.00  OWSTIMER.EXE (0x1DF4)                    0x1994 SharePoint Foundation          Topology                     
     8xqx High     Exception in RefreshCache. Exception message :The trust relationship between the primary domain and the trusted domain failed.   
    11/30/2011 12:14:56.00  OWSTIMER.EXE (0x1DF4)                    0x1994 SharePoint Foundation          Timer                        
     2n2p Monitorable The following error occured while trying to initialize the timer: System.SystemException: The trust relationship between the primary domain and the trusted domain failed.       at System.Security.Principal.SecurityIdentifier.TranslateToNTAccounts(IdentityReferenceCollection
    sourceSids, Boolean& someFailed)     at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess)     at System.Security.Principal.SecurityIdentifier.Translate(Type
    targetType)     at Microsoft.SharePoint.Administration.SPAce`1.get_PrincipalName()     at Microsoft.SharePoint.Administration.SPAcl`1.Add(String principalName, String displayName, Byte[] securityIdentifier, T grantRightsMask,
    T denyRightsMask)     at Microsoft.SharePoint.Administrati... 
    11/30/2011 12:14:56.00* OWSTIMER.EXE (0x1DF4)                    0x1994 SharePoint Foundation          Timer                        
     2n2p Monitorable ...on.SPAcl`1..ctor(String persistedAcl)     at Microsoft.SharePoint.Administration.SPServiceApplication.OnDeserialization()     at Microsoft.SharePoint.Administration.SPIisWebServiceApplication.OnDeserialization()    
    at Microsoft.SharePoint.Administration.SPPersistedObject.Initialize(ISPPersistedStoreProvider persistedStoreProvider, Guid id, Guid parentId, String name, SPObjectStatus status, Int64 version, XmlDocument state)     at Microsoft.SharePoint.Administration.SPConfigurationDatabase.GetObject(Guid
    id, Guid parentId, Guid type, String name, SPObjectStatus status, Byte[] versionBuffer, String xml)     at Microsoft.SharePoint.Administration.SPConfigurationDatabase.GetObject(SqlDataReader dr)     at Microsoft.SharePoint.Administration.SPConfigurationDatabase.RefreshCache(Int64
    currentVe...
    Please guide me on the above issue; this will be of great help.
    Thanks.

    I have the same error. I verified the trust and the ports and cleaned up the cache; nothing has helped.
    The problem is caused by the User Profile Synchronization Service:
    UserProfileProperty_WCFLogging :: ProfilePropertyService.GetProfileProperties Exception: System.SystemException:
    The trust relationship between the primary domain and the trusted domain failed.       at System.Security.Principal.SecurityIdentifier.TranslateToNTAccounts(IdentityReferenceCollection sourceSids,
    Boolean& someFailed)     at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess)     at System.Security.Principal.SecurityIdentifier.Translate(Type
    targetType)     at Microsoft.SharePoint.Administration.SPAce`1.get_PrincipalName()     at Microsoft.SharePoint.Administration.SPAcl`1.Add(String principalName, String displayName, SPIdentifierType identifierType, Byte[]
    identifier, T grantRightsMask, T denyRigh...        
    08/23/2014 13:00:20.96*        w3wp.exe (0x2204)                      
            0x293C        SharePoint Portal Server              User Profiles                
            eh0u        Unexpected        ...tsMask)     at Microsoft.SharePoint.Administration.SPAcl`1..ctor(String persistedAcl)    
    at Microsoft.Office.Server.Administration.UserProfileApplication.get_SerializedAdministratorAcl()     at Microsoft.Office.Server.Administration.UserProfileApplication.GetProperties()     at Microsoft.Office.Server.UserProfiles.ProfilePropertyService.GetProfileProperties()
    Please let me know if you have found any solution for this.
    Regards,
    Kunal  

  • Training on Calc Scripts, Reporting Scripts, MaxL and Data Loading

    Hi All
    I am new to this forum. I am looking for someone who can train me on topics like Calc Scripts, Reporting Scripts, MaxL, and data loading.
    I am willing to pay for your time. Please let me know.
    Thanks

    Hi Friend,
    As you seem to be new to Essbase, you should first learn what Essbase and OLAP are and the difference between dense and sparse, then use the Essbase Technical Reference for further reading.
    After that, go through
    https://blogs.oracle.com/HyperionPlanning/ and start exploring calc scripts, MaxL, etc.
    And all of this is free for you.
    Thanks,
    Avneet

  • Automate EIS dim builds and data loads

    I want to automate the dimension builds and data loads from my ETL tool (DTS). I have not been able to find anything about scripting EIS automation in the documentation. Is there any?

    What you can do is go into the EIS metaoutline and create a member load script and a data load script. Do this by selecting the Outline menu item, then selecting Member Load. Click Next; on this screen, select only "Save load script" and click the "Save scripts" button to give it a name, then click Finish. Repeat for the data load script. (If you are using ASO cubes, you must use separate scripts; you can't do both in one script.) Then create a batch file to run the member load and data load scripts, and in DTS use an Execute Process task to run the batch file.
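
    If it helps to see the moving parts, below is a minimal sketch of what that scheduled step could look like, wrapped in Java instead of a batch file. It assumes the EIS command-line utility olapicmd and a command file (here eisload.txt) containing the saved LOGIN, LOADMEMBER, LOADDATA, and EXIT commands; the -f flag and command names should be verified against the EIS documentation for your release.

        import java.io.IOException;

        /** Sketch: run the saved EIS member-load and data-load scripts from a
         *  scheduler, i.e. the same job as the batch file called by DTS's
         *  Execute Process task.
         *  ASSUMPTIONS: olapicmd is on the PATH and eisload.txt holds the
         *  saved load commands (LOGIN / LOADMEMBER / LOADDATA / EXIT). */
        public class RunEisLoads {
            public static void main(String[] args) throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("olapicmd", "-feisload.txt");
                pb.inheritIO();                    // surface EIS output in the job log
                int rc = pb.start().waitFor();
                if (rc != 0) {
                    System.err.println("EIS load failed with exit code " + rc);
                    System.exit(rc);               // let the scheduler see the failure
                }
            }
        }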

  • Error while adding a used relationship between the New DC and the Web DC

    Hi Gurus
    We are getting an error in NWDS while adding a used relationship between the new DC and the Web DC.
    Steps we have done:
    1. Created the custom project from inactive DCs.
    2. Created the project for the component crm/b2b in SHRAPP_1.
    3. Changed the application.xml and gave the context path.
    4. Tried to add a dependency to the custom-created DC, at which point we get an error saying: illegal dependency: the compartment sap.com_CUSTCRMPRJ_1 of DC sap.com/home/b2b_xyz(sap.com.CUSTCRMPRJ_1) must explicitly use compartment sap.com_SAP-SHRWEB_1 of DC sap.com/crm/isa/.
    So we skipped this step and tried to create the build, but the build fails.
    Please help us in this regard.
    Awaiting your quick response.
    Regards
    Satish

    Hi
    Please ignore my message above.
    Thanks for your response.
    After your valuable inputs we added the required dependencies and successfully created the projects; the projects then built successfully and an EAR file was created.
    We need to deploy this EAR file to the CRM application server through NWDI.
    To have NWDI deploy the EAR, we need to check in the activities I created for it; once I check in the activities, NWDI will deploy the EAR to the CRM application server.
    In the activity log we can see the activities as succeeded, but the deployment column is not showing any status.
    When I right-click on my activity ID, the deployment summary is also disabled.
    So finally my question is: where can I get the deployment log file, and where can I check the deployment status for my application?
    Any pointers in this regard would be of great help.
    Awaiting your valuable responses.
    Regards
    Satish

  • EIS member and data load - getting OS error. Please help!

    Please help! I have created an OLAP model and then a metaoutline.
    Then I went ahead with the member and data load. I logged into my server and started the member and data load.
    It gives me the following error:
    SELECT /*+ */ .. FROM <my_view_name>
    OS Error No such file or directory IS Error Member load terminated with error.
    The load terminated with errors.
    Thanks in advance for any replies.

    Thanks, all! The error has been resolved.
    I just had to create a directory in the Integration Services folder: $ISHOME/loadinfo
    The loadinfo folder was missing.
    Prathap,
    Is that view available at that time? (The query is generated automatically.)
    Which data source, and which version of Hyperion? The data source is Oracle 10g and the Hyperion version is 9.3.

  • What is the relationship between the SAP BC and XI?

    Hi Friends,
    What is the relationship between the SAP BC and XI?
    Regards
    Sam

    Hi Samuel Melvin ,
    The SAP BC is basically webMethods' product, supplied free of charge to SAP customers.
    SAP XI is a purchased product from SAP that can be used to integrate multiple systems; you would need to contact your SAP account rep to discuss licensing issues. You cannot download it for free from SAPNet.
    It is probably more relevant to compare these solutions in the context of a specific scenario, but here are some basic differences. The foremost one is obviously that SAP Exchange Infrastructure (XI) belongs to the SAP NetWeaver technology suite, whereas the Business Connector, although bundled by SAP, is really an integration tool provided by webMethods.
    Being a NetWeaver solution, SAP XI is central in design and configuration. SAP XI is integrated as a required solution for some of the new mySAP solutions like SRM. For example, the mySAP SRM business scenario for Supplier Self-Services in EBP requires the implementation of SAP XI; the Business Connector cannot be used in its place for this scenario, although it can still be used with SRM for all XML-based communication. Both mySAP SRM and XI run natively on the SAP WebAS server.
    The Business Connector is a point-to-point solution that provides messaging and routing. Currently a number of organizations use the Business Connector to communicate with their partners using XML-based messaging. This can be achieved with SAP XI as well, but SAP's strategy is to use XI as a central hub for messaging, routing, and communication between all SAP and non-SAP systems as well as partners. As SAP XI is a relatively new solution, organizations that want to exchange XML-based documents with their partners, or that have other point-to-point scenarios, can still use the SAP Business Connector.
    Initially, when SAP announced SAP XI, there was a notion that the Business Connector would be phased out and SAP XI would be the solution used from then on. SAP will continue to support the Business Connector, but SAP Exchange Infrastructure will remain SAP's strategic integration solution. All new and updated SAP solutions will make use of XI for all process-centric, message-based integration. As upgrades are scheduled and new solutions are deployed we will see the mandated need for XI, but currently the only solution that natively requires SAP XI is mySAP SRM 2.0.
    It will be interesting to see how organizations that already use the SAP Business Connector warm up to the SAP XI idea. For the most part, though, they will not have a choice, as the new mySAP products are going to natively require SAP XI for all process-centric integration between the different SAP solutions. But there is no reason why organizations cannot continue to use the SAP Business Connector for the point-to-point solutions already deployed, possibly including document exchange and partner integration. Hope this helps.
    These websites may help you understand XI and BC:
    Understanding SAP XI SEEBURGER
    http://www.seeburger.com/fileadmin/com/pdf/SAP_Exchange_Infrastructure_Integratio_Strategy.pdf
    http://www.t2b.ch/Flyers/t2b%20SAP%20Xi%20Flyer.pdf
    SAP Exchange Infrastructure (BC-XI)
    http://help.sap.com/saphelp_srm30/helpdata/en/0f/80243b4a66ae0ce10000000a11402f/content.htm
    webMethods for SAP
    http://www1.webmethods.com/PDF/webMethods_for_SAP-wp.pdf
    http://help.sap.com/saphelp_srm30/helpdata/en/72/0fe1385bed2815e10000000a114084/content.htm
    cheers!
    gyanaraj
    ****Reward points if you find this helpful

  • Relationship between Transaction Document Nº and Request/Task

    Dear Gurus
    I've created a change request (urgent correction) and it was assigned an ID (Transaction Document Nº), for example 8000001285.
    The next step is to create the Request/Task relationship with this ID.
    I need to generate a report on the relationship between Transaction Document Nº and Request/Task, and I need to know the tables where this information is saved.
    Could you please help me?
    LASAM

    Thanks,
    SAP is working to make the two technologies comparable, as we can also see from products like [SAP Application Interface Framework|http://help.sap.com/saphelp_aif10/helpdata/en/5a/3eacf824e74542abbd2271238dc70b/frameset.htm], so I ask the community whether somebody has found out how the link between the business document and ABAP proxies is implemented.
    Regards

  • Relationship between a Sales Order and Purchase Order

    Dear All,
    Can we create a relationship between a sales order and a purchase order? If yes, how can we do that?
    Or how can we relate supply and demand?
    Please update.
    Many thanks in advance.

    It's easy.
    Refer to my blog for the complete query between OM requisitions and POs:
    http://eoracleapps.blogspot.com/2009/04/oracle-order-to-cash-queries.html

  • Relationship between Dynamic Memory Heap and Heap Data Structure

    This question is not strictly related to Java, but rather to programming in general, and I tend to get better answers from this community than any where else.
    Somehow, my school and industry experience have not given me the opportunity to explore and understand heaps (the data structure), so I'm investigating them now, and in particular, I've been looking at applications. I know they can be used for priority queues, heap sorts, and shortest-path searches. However, I would have thought that, obviously, there must be some sort of relationship between the heap data structure and the dynamic memory heap. Otherwise, I can think of no good reason why the dynamic memory heap would be named "heap". Surprisingly, after searching the web for 90 minutes or so, I've seen vague references, but nothing conclusive (the trouble seems to be that it's hard to get Google to understand that I'm using the word "heap" in two different contexts, and similarly, it would not likely understand that web authors use the word in two different contexts).
    The Java Virtual Machine Spec is silent on the subject, as "The Java virtual machine assumes no particular type of automatic storage management system, and the storage management technique may be chosen according to the implementor's system requirements."
    I've seen things like:
    [of dynamic memory] "All the blocks of a particular size are kept in a sorted linked list or tree (I extrapolate that sorted tree could imply heap)"
    [of dynamic memory] "The free and reserved areas of memory are maintained in a data structure similar to binary trees called a heap"
    [of dynamic memory] "This is not related to the heap data structure"
    [of dynamic memory] "Not to be confused with the data structure known as a "heap"
    [of data structure] "Not to be confused with the dynamic memory pool, often known as TheHeap"
    At this point, I've come to surmise that some (but not all) memory management algorithms use heaps to track which (pages? blocks? bytes?) of memory are used, and which are not. However, the point of a heap is to store data so that the max (or min) key is at the root of the heap. But we might want to allocate memory of different sizes at different times, so it wouldn't make sense to key on the amount of available memory in a particular region of the free store.
    I must assume then that there would be a different heap maintained for each size of memory block that can be allocated, and the key must have something to do with the attractiveness of the particular memory block in the heap (perhaps the lowest address, resulting, hopefully, in growing the free store space less often, leaving more space for the stack to grow, or perhaps keyed based on the fragmentation, to hopefully result in less fragmentation, and therefore more efficient use of the memory space, or perhaps based on page boundaries, keeping as much data in the same page as possible, etc).
    So at this point, I have a few questions I've been unable to resolve completely:
    1. Am I correct that the heap was so named because (perhaps at one point in time), a heap is/was commonly used to track the available memory in the free store?
    2. If so, would it be correct that there would be a heap per standard block size?
    3. Also, at what level of granularity would a heap typically be used (memory page, memory blocks, individual words (4-bytes))?
    4. What would be the most likely property one would use as a key. That is, what makes the root item on the heap ideal?
    5. Would an industrial-strength system like the JVM use a (perhaps modified or tuned) heap for this sort of task, or would this typically be too naive for a real-world solution today?
    Any insight would be awesome!
    Thanks,
    A.

    jschell wrote:
    I think you are not only mixing terms but domains.
    For starters the OS allocs memory. Applications, regardless of language, request memory from the OS and use it in various ways.
    There are many variations of the term "heap" like the following.
    [http://en.wikipedia.org/wiki/Heap_(data_structure)]
    [http://en.wikipedia.org/wiki/Dynamic_memory_allocation]
    A java VM will request memory from the OS (from a 'heap') and use it in its application 'heap' (C/C++) and then create the Java 'heap'. There can be variations of that along the way that can and likely will include variations of how each heap is used, potentially code that creates its own heap, and potentially other allocators which use something which is not a heap.
    This last part I find a bit confusing. By "use something which is not a heap", do you mean the heap data structure, or the dynamic memory pool meaning of heap? If the former, then you would be implying that it would be common for a heap data structure to be used to manage the heap dynamic memory pool. If the latter, what would this "something which is not a heap" be? The best definition of "heap" I've found simply states that it is a pool of memory that can be dynamically allocated. If there is some other way of allocating dynamic memory, then it would suggest that the previous definition of "heap" is incomplete.
    >
    So to terms.
    1. Am I correct that the heap was so named because (perhaps at one point in time) a heap is/was commonly used to track the available memory in the free store?
    Which 'heap'? The VM one? It is probably named that because the implementors of the Sun VM were familiar with how C++ and Smalltalk allocated memory.
    Okay, but that raises the question: was the heap in C++ and/or Smalltalk so named for the reason queried above?
    >
    2. If so, would it be correct that there would be a heap per standard block size?
    Not sure what you are referring to, but probably a detail of the implementation. And since there are different levels, the question doesn't mean much.
    However, OS allocations are always by block, if that helps. After that it requires making the question much, much more specific.
    3. Also, at what level of granularity would a heap typically be used (memory page, memory blocks, individual words (4 bytes))?
    Again not specific enough. A typical standard implementation of a heap could not be at the word level. And it is unlikely, but not impossible, that variations would support word-size allocations. The VM heap might use word boundaries (but not size), whereas the application heap certainly does (word boundary).
    My understanding is that the application would request blocks from the OS, and then something like malloc would manage the memory within the allocated blocks. malloc (or whatever equivalent Java uses) would have to keep track of the memory it has allocated somehow, and I would think it would have to do this at the word level, since it's most commonly going to allocate memory at the word level for references to other objects, etc.
    So I guess my question here would really be: if the dynamic memory heap is so named because there was once a memory management strategy that relied upon a heap data structure (of which I've found no proof, though some suggestive literature), then would that have applied at the OS page-fault level, tracking allocated blocks, or at the malloc level, allocating individual words as necessary?
    >
    4. What would be the most likely property one would use as a key? That is, what makes the root item on the heap ideal?
    "Key" is not a term that will apply in this discussion. You appear to be referring to strategies for effective allocation of memory, such as allocations from different regions by size comparison. It is possible that all levels might use such an allocator. General-purpose applications do not sort allocations, though (as per your one reference that mentions 'key').
    Sorry, I got the term "key" from an article I read regarding heaps, which indicates that a "key" is used to sort the elements; I guess that is a more generalized way to build a heap than assuming a natural ordering on its elements. I'm not sure if the terminology is standard.
    >
    5. Would an industrial-strength system like the JVM use a (perhaps modified or tuned) heap for this sort of task, or would this typically be too naive for a real-world solution today?
    Again too indefinite. The Sun VM uses a rather complicated allocator, the model for which originated after years of preceding research, certainly in Smalltalk and in Lisp as well, both commercially and academically. I am sure the default is rules-driven, either explicitly or implicitly, so it is self-tuning. There are command-line options that allow you to change how it works as well.
    I guess perhaps I could attempt to clarify my initial question a bit.
    There is a 1:1 correspondence between the runtime stack and a stack data structure. That is, when you call a function, it pushes a stack frame onto the runtime stack; when you return from a function, it pops a stack frame from the runtime stack. This is almost certainly the reason the runtime stack is named as it is.
    The question is: is there, or has there ever been, a 1:1 correspondence between some aspect of the dynamic memory heap (or how it is managed) and a heap data structure? If so, it would explain the name, but I'm a bit puzzled as to how a heap data structure would be of assistance in creating or managing the dynamic memory heap. If not, then does anybody know where the name "heap" came from, as it applies to the dynamic memory pool?
    A.
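
    To make the conjecture in this thread concrete, here is a toy Java sketch of a free-list allocator that tracks free blocks with a binary heap (java.util.PriorityQueue) keyed on block address, so allocation always returns the lowest-addressed block that fits. This is purely an illustration of how a heap data structure could manage a memory pool; it is not how HotSpot or any particular real allocator works, and all names are invented.

        import java.util.*;

        /** Toy illustration: free memory blocks tracked in a min-heap keyed
         *  on start address (the "key" debated above). Not a real allocator:
         *  no coalescing, no thread safety, linear scans when blocks are small. */
        public class HeapBackedFreeList {
            static final class Block {
                final long address; final int size;
                Block(long address, int size) { this.address = address; this.size = size; }
            }

            private final PriorityQueue<Block> free =
                    new PriorityQueue<>(Comparator.comparingLong((Block b) -> b.address));

            public HeapBackedFreeList(long base, int size) { free.add(new Block(base, size)); }

            /** Pop blocks in address order until one is big enough; split off the
             *  remainder and push it back. Skipped blocks are re-inserted. */
            public long allocate(int size) {
                List<Block> tooSmall = new ArrayList<>();
                Block chosen = null;
                while (!free.isEmpty()) {
                    Block b = free.poll();
                    if (b.size >= size) { chosen = b; break; }
                    tooSmall.add(b);
                }
                free.addAll(tooSmall);
                if (chosen == null) throw new OutOfMemoryError("no block of size " + size);
                if (chosen.size > size)
                    free.add(new Block(chosen.address + size, chosen.size - size));
                return chosen.address;
            }

            /** A real allocator would coalesce adjacent blocks here. */
            public void release(long address, int size) { free.add(new Block(address, size)); }

            public static void main(String[] args) {
                HeapBackedFreeList h = new HeapBackedFreeList(0x1000, 4096);
                long a = h.allocate(512);
                long b = h.allocate(256);
                h.release(a, 512);
                long c = h.allocate(128);   // reuses the lowest-address hole left by a
                System.out.printf("a=%#x b=%#x c=%#x%n", a, b, c);
            }
        }

    Keying on the lowest address biases allocations toward one end of the pool, which matches the question's speculation about leaving room for the stack to grow; whether any historical allocator actually worked this way, and thereby named the memory heap, is exactly the point the thread leaves unresolved.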

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on those topics:
    1. Transfer rule activation. I just finished transporting my cubes, ETL, etc. to the productive system and started filling the cubes with data. Very often during data load it turns out that transfer rules need to be activated, even though I transported them active and (I think) did not change anything after transport. Then I have to create transfer rule transports on dev again, transport the changes to prod, and execute the data load again.
    It is very annoying. What do you suggest for this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system: about 0.5 h for 50,000 records. But when I executed the load on production it took 2 h for 200,000 records, so it was twice as slow as dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance, and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure. (But in general, as you know, activation of the transfer structure should happen automatically after transport of the object.)
    2. One factor in the time difference is environmental: as you know, in a production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case, though, both systems are performing equally: you said the dev system took half an hour for 50,000 records and production took 2 hours for 200,000 records, so there are simply more records in the production system and the load took proportionally longer. If it is really causing a problem, then you have to do some performance tuning.
    Hope this helps
    Thanks
    Sat

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent, please.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent, please.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the JDBC sketch after this list for the same idea outside ABAP). When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
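
    Tip 11's point, replacing row-by-row single selects with one set-based read, applies beyond ABAP. As a language-neutral illustration, here is a JDBC sketch of the same idea; the table and column names (material_texts, matnr, maktx) are hypothetical, merely echoing SAP-style naming, and a real IN list would need chunking (Oracle, for instance, caps it at 1000 entries).

        import java.sql.*;
        import java.util.*;

        /** Illustration of tip 11: one set-based read instead of N single selects. */
        public class SetBasedRead {
            /** Slow pattern: one database round trip per key. */
            static Map<String, String> singleSelects(Connection con, List<String> keys) throws SQLException {
                Map<String, String> out = new HashMap<>();
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT matnr, maktx FROM material_texts WHERE matnr = ?")) {
                    for (String k : keys) {                       // N round trips
                        ps.setString(1, k);
                        try (ResultSet rs = ps.executeQuery()) {
                            if (rs.next()) out.put(k, rs.getString(2));
                        }
                    }
                }
                return out;
            }

            /** Fast pattern: buffer the keys and read them in one statement,
             *  the JDBC analogue of ABAP array operations / FOR ALL ENTRIES.
             *  Assumes keys is non-empty. */
            static Map<String, String> setBased(Connection con, List<String> keys) throws SQLException {
                String marks = String.join(",", Collections.nCopies(keys.size(), "?"));
                Map<String, String> out = new HashMap<>();
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT matnr, maktx FROM material_texts WHERE matnr IN (" + marks + ")")) {
                    for (int i = 0; i < keys.size(); i++) ps.setString(i + 1, keys.get(i));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) out.put(rs.getString(1), rs.getString(2));
                    }
                }
                return out;
            }
        }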
    Hope it Helps
    Chetan
    @CP..

  • How to display metadata such as data load date in an Answers report title?

    We have a requirement to display the last load date of the data relevant to the report the user is viewing. We have this information stored in a metadata table, listed by the fact table the report references. Our proposed solution is to create new Answers reports off this metadata table and put each report (with the appropriate filter on the fact table) in each dashboard section where the corresponding report is placed. One problem with this approach is that the load date information will not be reflected in the printed form of the report, as the date is dashboard content, not report content. Is there any way to overcome this (other than creating a ton of variables specifically for this purpose)? I'm open to entertaining JavaScript ideas, if necessary. I would love to know how to push the OBIEE envelope further. Thanks in advance.
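
    A minimal sketch of the lookup the proposed metadata table implies: fetch the last load date for a given fact table so it can feed whatever report or variable displays it. The table and column names (etl_load_audit, fact_table, load_date) are hypothetical stand-ins for the poster's metadata table.

        import java.sql.*;

        /** Sketch: read the last load date for one fact table from the
         *  ETL metadata table described above (names are hypothetical). */
        public class LastLoadDate {
            static Timestamp lastLoad(Connection con, String factTable) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT MAX(load_date) FROM etl_load_audit WHERE fact_table = ?")) {
                    ps.setString(1, factTable);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getTimestamp(1) : null;
                    }
                }
            }
        }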

    Hi,
    I discussed this with some people who are familiar with SharePoint, and we both thought Windows Explorer may not accept the custom metadata.
    If you want to do some customization, it is recommended to ask for help in the development forum:
    http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/home?category=windowsdesktopdev
    Alex Zhao
    TechNet Community Support

  • Creation of Relationship between an Org Unit and ROSTER

    Hi All,
    I have done all the back-end settings for Roster. However, when I try to create a relationship between a Roster and an org unit, no Rosters are available. Can someone please guide me?
    Thanks,
    Punam

    Hi,
    There are standard relationships for Rosters (91, 92, 93, and 94),
    but before doing all the configuration you have to activate the switch for Roster:
    Go to transaction SHDG.
    Choose the path Application Components -> Payroll -> India -> India Public Sector.
    Click on the Change Global Fields icon and add the value 'X' for data element "PIN_SW_INPS".
    Activate the entry.
    Warm Regards,
    Kapil Kaushal
