LOAD process questions
Hi Gurus,
I have to load ODS1 from another ODS (ODS2), and I want to delta-enable the future loads. I assume the steps are to first initialize and then do the deltas. When I right-click ODS2 to update the ODS data into data targets, it gives me three options: "Full Update" (which I don't want to do), "Initial Update", and "Fill Data Targets with Inits/Deltas Initial". Which of the last two do I choose? After this step, how do I do the delta updates? How do I integrate the delta process into a process chain? And where do I find the InfoPackage for an ODS-to-ODS load?
Thank You
Southie
Hi,
When you are loading data from ODS1 to ODS2 with the delta option, follow the sequence below.
1. Identify the exported DataSource/InfoSource. Example: 8ODS1 is the InfoSource/DataSource name.
2. Sometimes you cannot see that InfoSource in the InfoSource tree; in that case, in the menu bar choose Settings > Display Generated Objects, and then you can find the InfoSource/DataSource.
3. Create an InfoPackage under InfoSource -> Source System with the Init option and schedule it.
4. Create another InfoPackage with delta update and schedule it. If you want to run the delta in a process chain, add this InfoPackage to the process chain with a variant.
5. If you want to load data into ODS2 with a full update after initialization, create one more InfoPackage, select "Repair Full Request" in the menu bar, and schedule it under the exported DataSource/InfoSource (8ODS1).
Thanks,
G.R.Babu
Similar Messages
-
Questions about the load processing of OpenSparc T1 Dcache
Hi,
I have some questions about OpenSparc T1 Dcache load processing.
During load processing, subsequent loads to the same address need to search the store buffer for a valid store to that address. If there is a CAM hit, data is sourced from the store buffer, not from the D-cache, and no load request will be sent to the L2.
What if there is no CAM hit. Would the load request be sent to L2? Or would Dcache be checked for the requested data?
If the load request would be sent to L2, what next? Would the Dcache be updated?
Thanks
The store buffer is checked for a Read After Write (RAW) condition on loads. If there is a full RAW - i.e. the full data exists in the store buffer - then the data is bypassed and no D-cache access happens.
If the RAW is partial (e.g. a word store followed by a double-word load), the load is treated as a miss: the store is allowed to complete in the L2 cache, and then the load instruction is completed.
On a miss in the STB, the D-cache is accessed. If it hits, data is fetched from the D$. If it misses, data is fetched from the L2$ and allocated in the D$. -
Data Load process for 0FI_AR_4 failed
Hi!
I am aobut to implement SAP Best practices scenario "Accounts Receivable Analysis".
When I schedule the data load process in dialog (immediately) for transaction data 0FI_AR_4 and check it in the Monitor, the status is yellow:
On the top I can see the following information:
12:33:35 (194 from 0 records)
Request still running
Diagnosis
No errors found. The current process has probably not finished yet.
System Response
The ALE inbox of BI is identical to the ALE outbox of the source system
or
the maximum wait time for this request has not yet been exceeded
or
the background job has not yet finished in the source system.
Current status
No Idocs arrived from the source system.
Question:
Which actions can I take to run the loading process successfully?
Hi,
The job is still in progress it seems.
You could monitor the job that was created in R/3 (by copying the technical name in the monitor, appending "BI" to it as a prefix, and searching for this in SM37 in R/3).
Keep an eye on ST22 as well if this job is taking too long, as you may have gotten a short dump for it already that has not been reported to the monitor yet.
Regards,
De Villiers -
How to automate the data load process using data load file & task Scheduler
Hi,
I am doing Automated Process to load the data in Hyperion Planning application with the help of data_Load.bat file & Task Scheduler.
I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
Could you help me automate the data load process using the Data_Load.bat file and Task Scheduler, or tell me what other files are required to achieve this?
Thanks
To follow up on your question: are you using MaxL scripts for the data load?
If so, I have seen an issue where, if the batch file (e.g. load_data.bat) does not contain the full path to the MaxL script, running it through Task Scheduler will appear to work but the log and/or error file will not be created - meaning the batch claims it ran from Task Scheduler although it didn't actually do what you needed.
If you are using MaxL, use this as the batch:
"essmsh C:\data\DataLoad.mxl"
Or you can use the full path for the MaxL script; either way works. The only reason I would think the MaxL might then not work is if the batch is not updated to reflect all the MaxL PATH changes, or if you need to update your environment variables so that the essmsh command works in a command prompt. -
Hi all,
I have a service object (SO1) which has been set to Load Balancing.
This service object has an attribute which serves as a number allocator
(NA1).
This NA1 provides a unique number, across the whole application, for each of the records that need to be stored in the DB.
The problem is, will the NA1 get replicated if the SO1 is replicated?
If yes, will NA1 crash?
Regards,
Martin Chan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Senior Analyst/Programmer
Dept of Education and Training
Mobile : 0413-996-116
Email: martin.chan@det.nsw.edu.au
Tel: 02-9942-9685
Hi Serge,
Could you prefix it with the PID of the load balanced process?
No I can't. At least not at the moment.
When a service object is replicated, it is automatically replicated into a different partition...
Thanks.
One piece of advice: make the NA1 shared, so that if you get multithreaded access to it, you won't screw things up.
I am thinking it may be better to create it as a service object on its own.
How is the number returned by the NA1 generated?
It gets generated by Forte's code.
... Try to make it so that the load-balanced partitions don't need to access the database more than once in 5 min. to get a new Seed Key. Then you would not need the PID.
Thanks for your advice.
Regards
Martin Chan
-----Original Message-----
From: Serge Blais [mailto:Serge.Blais@Sun.com]
Sent: Tuesday, 3 April 2001 14:17
To: Chan, Martin
Subject: RE: (forte-users) SO Load Balancing Question
You're right, they can generate the same number. How much control do you have
over the ID being generated? Could you prefix it with the PID of the load-balanced process?
Just a note: When a service object is replicated, it is automatically
replicated into a different partition, possibly on the same machine or on a
different one.
An advice, make the NA1 shared. So if you get to do multithreaded access to
it, you won't screw up things.
How is the number returned by the NA1 generated ? If NA1 is using a stored
procedure, or something like:
Start TRX
read number
newnumber = number+5000
write back newnumber
End Trx
Something like this will be very safe. The database index table takes care of the critical section. Then you can be sure that each replicate can be independent (not hit the others) for 5000 iterations. Depending on the frequency, you may want to raise or lower this number: too high and the keys grow very large very quickly, with holes in the sequence; too low and the replicates will hit each other. Try to make it so that the load-balanced partitions don't need to access the database more than once in 5 min. to get a new Seed Key. Then you would not need the PID.
Serge
At 01:59 PM 4/3/2001 +1000, you wrote:
Hi Serge,
The number returned by the NA1 is used as a primary key for each of the records stored in the DB.
The Number Allocator NA1 is required to access the DB to update an ID table which carries the next available sequence number. NA1 only updates this table once every 5000 records.
For example, the initial value of the sequence is 1.
The next update will change the value to 5001, the next to 10001, and so on.
The properties of this NA1 class at runtime
Shared - Disallowed
Distributed - Disallowed
Transactional - Is Default
Monitored - Disallowed
Unfortunately, this attribute is not a handle but is instantiated by the SO1 itself.
I have been thinking: if SO1 is replicated within the same partition, each replicate will carry its own NA1, and NA1 and its replicate may return the same number if the initial values of their sequences are the same. Correct?
Regards
Martin Chan
-----Original Message-----
From: Serge Blais [mailto:Serge.Blais@Sun.com]
Sent: Tuesday, 3 April 2001 13:11
To: Chan, Martin; forte-users@lists.xpedior.com
Subject: Re: (forte-users) SO Load Balancing Question
Let's see if I understand right.
You have a service object that keeps a handle to an object that either keeps state information or generates state information. Now the thing to figure out is which it is. Let's assume that NA1 is a number generator that does not need to be synchronized and doesn't need to access any external resource. It would still work, depending on the algorithm you are using.
Will they share the same NA1? It depends on the nature of NA1, but for sure NA1 would have to be an anchored object. And if multiple partitions shared the same object "only" for key generation, you would bring down your performance on key generation or key update (by adding one inter-process call).
In short:
1. Many scenarios are possible; you need to be clearer in your description.
2. If an object is shared by load-balanced partitions, this greatly reduces the gain of load balancing the partition.
3. If NA1 is keeping state, any access to it would need to be controlled as "shared".
Have fun now...
Serge
For the archives, go to: http://lists.xpedior.com/forte-users and use the login: forte and the password: archive. To unsubscribe, send in a new email the word: 'Unsubscribe' to: forte-users-request@lists.xpedior.com
Serge Blais
Professional Services Engineer
iPlanet Expertise Center
Sun Professional Services
Cell : (514) 234-4110
Serge.Blais@Sun.com -
WAAS Application Requests - Process Question
Non Technical Process Question
We all have forms our users complete when a firewall rule or change is needed. You may even have similar documents for when load balancer or DNS changes are required. Does anyone have a document they can share that outlines what pieces of information are needed for integrating applications into WAAS? What about ongoing changes?
Source, destination, and TCP port information is really only a small portion of what is needed to maintain a clean, defined methodology within the WAAS manager. Does anyone have an example, or can you describe how you collect the initial information to set up WAAS, and how you keep track of changes that may be needed as the application characteristics change or the server farm expands horizontally?
Thanks - Sam
Sam,
the general answer for detailed information on how to configure WAAS for certain applications is described here:
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v501/configuration/guide/policy.html
In general, WAAS comes preconfigured for the most widely used applications in the industry.
In order to understand which configuration is necessary for a new application, one needs to understand the basic options WAAS offers.
There are two options:
1) Use an Application Optimizer (AO) if you need dedicated protocol know-how (e.g. (e)MAPI, CIFS, SSL, HTTP, ICA, to name a few).
2) Generic TCP traffic is optimized using TCP Flow Optimization (TFO), Data Redundancy Elimination (DRE), and Lempel-Ziv (LZ) compression.
Then, in terms of process, the question is:
1) Is there a policy preconfigured for the new application?
a) yes
If an AO is used, does the AO need configuration? (Example: the SSL AO requires certificates.)
Are there any specifics that require further fine-tuning? (The answer is mostly no; example: non-standard ports.)
b) no
You define what you need for the application based on the protocol characteristics.
Once you have defined these characteristics, you can choose one of the AOs, or decide which of the "generic" options fits the traffic. For example, traffic that is already compressed does not benefit much from LZ, so choose only TFO and DRE. Another example: traffic that does not contain much duplicate data may not benefit much from DRE, so you configure TFO only.
Does that answer your question?
Thanks,
chris -
What would make my Cp7 course get hung up during the loading process when launching on LMS?
In Cp7, I created a SCORM project to post on our LMS and submitted it for testing. Our LMS department confirmed it worked successfully, and in fact I tested it myself – it worked great. However, my work team asked me to make some changes to the content before pushing it out of the testing phase to launch company-wide. Per company policy, making changes to a course means resubmitting it once more to be tested before making it available to everybody. So I made the requested changes and resubmitted it for testing. Now our LMS department reports the course will not launch properly – it gets stuck in the loading process. It shows “Loading...” endlessly but never loads.
When I submitted the updated version for testing, I kept all the project settings that worked successfully the first time. The changes I made to the project were:
I added a slide toward the beginning (slide 2) to give the user navigation tips.
I set slides so that each one must play all the way through before the user can proceed to the next slide (I think I did this simply by removing "play" from the playbar).
Under table of contents settings, I checked “navigate visited slides only,” so the user can navigate backwards using the contents bar at left, but can only navigate forward to slides that have already played or to the next slide in the queue.
I broke up a couple of lengthy multiple choice questions into shorter ones (for an additional two slides).
Is there any reason one of these changes would make the course get hung up during loading?
Is there a size limit for Cp7 projects which, if exceeded, might be causing such an issue?
Or does anyone have ideas about what else might be making it get hung up in the loading process?
Thank you, any and all, for your feedback.
Back up the iPhoto library like any other backup - make a copy of the iPhoto library in case of problems.
Depress the Option (Alt) and Command keys and launch iPhoto - anywhere you can launch iPhoto you can do this - and keep the keys down until you get the rebuild window.
LN -
Hi,
How do you model a load process in BPM 11g using the new BPMN palette. The load process queries an external oracle database table and creates tasks for the end users in the workspace. Each task has a user interface that will display the data passed in.
thanks.
OracleStudent,
I am not going to recommend fiddling with your redo log size; that would be my last option if I had to.
Number of records = 8,413,427
Txt file size = 3.59 GB
Columns = 91
You said remote server - is that the case? I have no idea what you mean; could you tell me what "remote server" means, and how I can check this? I recently joined this company, and I asked the developer, who showed me the code where he is using direct=true. Please help me; this loading process is very frustrating for me. Please tell me what I need to check.
Couple of questions.
How are you loading this data? You mentioned using some .NET application; my question is whether this .NET application resides on the same server as your database or runs from a different machine. Also, if you are invoking sqlldr (as you mentioned), please post your sqlldr control file. During the load it should generate a log file; check it and look for the following lines to verify and confirm you are using direct path.
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct
2. Do you have any indexes on this table? If yes, how many and of what type - regular B-tree, bitmap, or both?
3. Is this table in logging or nologging state?
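The log check described above can also be automated. Here is an illustrative Python sketch (the helper name and sample text are made up; the "Path used" line mirrors the log excerpt quoted in this thread):

```python
def used_direct_path(log_text):
    """Return True if a SQL*Loader log reports a direct-path load."""
    for line in log_text.splitlines():
        if line.strip().lower().startswith("path used:"):
            return "direct" in line.lower()
    return False

# Sample log fragment, matching the lines quoted above.
sample = """Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation:    none specified
Path used:      Direct"""

print(used_direct_path(sample))  # True
```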
Regards -
Loading data into essbase - can't stop the loading process
Hello Everyone
I have built an interface which loads data into essbase.
The interface was working great until last night; I don't know why it got stuck at the stage of loading data into Essbase.
(This is my bonus question: why? Does it have anything to do with the fact that I have refreshed the database via Planning?)
Anyway, we tried to kill the loading process by clicking Stop on the interface execution in Operator, and even after we killed the session in Essbase it was still there.
Does anyone know why?
Thanks
Hi,
I can't answer your questions but ...
I'm very familiar with the fact that trying to stop an execution fails.
The only solution I found is to run all my executions through an agent.
With an agent, if I have a problem, as an emergency I can stop the agent and it will stop the execution.
I'm sure it is a bit dirty, but it works, and that is better than nothing.
The ugliest (and only) way I found to stop an execution without an agent was to drop a table used by the job (like an I$ table), which forces it to fail.
Sorry no more clue.
Regards,
Brice -
Optimize the data load process into BPC Cubes on BW
Hello Gurus,
We would like to know how to optimize the data load process. Our scenario is that we have ECC classic General Ledger, and we are looking for the best way to load data into the BW InfoCubes from an ECC source.
To complement the question above: from which tables must the data be extracted and then passed to BW so that the consolidation can be done? Also, do any other modules, like FI or EC-CS, have to be considered for this?
Best Regards,
Rodrigo
Hi Rodrigo,
Have you looked at the BW Business Content extractors available for the classic GL? If not, I suggest you take a look. BW business content provides all the business logic you will normally need to get data out of ECC and into BW for pretty much every ECC application component in existence: [http://help.sap.com/saphelp_nw70/helpdata/en/17/cdfb637ca5436fa07f1fdc0123aaf8/frameset.htm]
Ethan -
Repository corrupted/loading process is taking long time
The repository load process is getting stuck at the message below for a long time. We have disabled sort indexes on all lookup tables, but it is still taking a huge amount of time to load the repository. Any information on this would be of great help.
94 2011/08/26 20:16:55.523 Report Info Background_Thread@Accelerator Preload.cpp Processing sort indices for 'Products'... (98%)
81 2011/08/27 02:08:23.298 Report Info Background_Thread@Accelerator Preload.cpp Processing sort indices for 'Products'... (100%)
Regards,
Nitin
Hi Nitin,
There are many performance-improvement steps one can take here, including verifying your repository.
But I think it should only be a concern if this problem reoccurs.
The accelerators get created when there is a change in a table, and Update Indices creates them; possibly there were multiple changes and a load via Update Indices has not taken place for some time, which is why it is creating them.
For better performance you can do the following:
Make a judicious choice of which fields to track in change tracking.
Disk space has a huge impact on the smooth functioning of MDM; if there is not enough disk space, the MDM server cannot even load the repositories.
Have a closer look at the data model and field properties.
Verify whether your MDS.INI has the parameter Session Timeout Minutes (a number; it causes MDM Console, CLIX, and applications based on the new Java API to expire after the specified number of minutes elapses; the default is 14400 (24 hours), and when set to 0, sessions never time out). When you have many open connections, this can generate performance issues on the MDM server.
Refer to SAP Note Number: 1012745
Hope this helps.
Thanks,
Ravi -
Process Chain Load Processing Time Issue
Hi All,
One of my process chains runs daily, but after 2 hours of loading it shows red in the Monitor screen, even though after 5 hours the load is successful.
Why is the Monitor screen showing red?
Is it possible to extend the load processing time at the InfoPackage level?
For example, if it is set to go red after 60 seconds, I want to change it to go red after 120 seconds.
If yes...where we can do it ...please let me know the steps...
Regards,
Nithi.
Hey, hi,
Double-click the InfoPackage -> go to "Scheduler" in the top-left corner of the menu -> click "Timeout Time", and you have different options for changing it.
Hope this helps. -
When trying to download apps on my new iPad, I keep getting prompted to update my security questions "for my safety". When I choose this option it freezes and won't load the questions; however, when I hit "Not Now" it won't let me download. What do I do?
Reboot your iPad and then see if you can set the security questions.
Reboot the iPad by holding down on the sleep and home buttons at the same time for about 10-15 seconds until the Apple Logo appears - ignore the red slider - let go of the buttons. -
Btree vs Bitmap. Optimizing load process in Data Warehouse.
Hi,
I'm working on fine tuning a Data Warehousing system. I understand that Bitmap indexes are very good for OLAP systems, especially if the cardinality is low and if the WHERE clause has multiple fields on which bitmap indexes exist for each field.
However, what I'm fine-tuning is not the query but the load process: I want to minimize the total load time. If I create a bitmap index on a field with a cardinality of one million, and if the table has one million rows (each row has a distinct field value), then my understanding is:
The total size of the bitmap index = number of rows * (cardinality / 8) bytes
(because there are 8 bits in a byte).
Hence the size of my bitmap index will be
Million * Million / 8 bytes = 116 GB.
Also, does anyone know what the size of my B-tree index would be? I'm thinking:
The total size of the B-tree index = number of rows * (field length + 20) bytes
(assuming that the length of a rowid is 20 characters).
Hence the size of my B-tree index will be
Million * (10 + 20) bytes = 0.03 GB (assuming that my field length is 10 characters).
That means the B-tree index is much smaller than the bitmap index.
Is my math correct? If so, then the disk activity will be much higher for a bitmap index than for a B-tree index, and hence creation of the bitmap index should take much longer than creation of the B-tree index if the cardinality is high.
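As a sanity check, the back-of-the-envelope arithmetic above can be reproduced in a few lines of Python. Note these are the poster's naive, uncompressed estimates only (the 116 figure corresponds to GiB); real Oracle bitmap indexes are run-length compressed and far smaller in practice:

```python
# Naive worst-case estimates from the post; real Oracle bitmap indexes
# are run-length compressed, so the true size is far smaller.
rows = 1_000_000
cardinality = 1_000_000
field_len = 10    # assumed field length in characters
rowid_len = 20    # assumed rowid length in characters

bitmap_bytes = rows * cardinality / 8          # one bit per (row, key) pair
btree_bytes = rows * (field_len + rowid_len)   # key + rowid per entry

print(f"uncompressed bitmap: {bitmap_bytes / 2**30:.0f} GiB")   # 116 GiB
print(f"b-tree:              {btree_bytes / 2**30:.2f} GiB")    # 0.03 GiB
```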
Please let me know your opinions.
Thanks
Sankar
Hi Jaffar,
Thanks to you and Jonathan. This is the kind of answer I have been looking for.
If I understand your reply correctly, for the scenario from my original post, the bitmap index will be 32 MB whereas the B-tree will be 23 MB. Is that right?
Suppose there is an order table with 10 orders and four possible values for OrderType. Based on your reply, I now understand that the bitmap index is organized as shown below.
Data Table:
RowId OrderNo OrderType
1 23456 A
2 23457 A
3 23458 B
4 23459 C
5 23460 C
6 23461 C
7 23462 B
8 23463 B
9 23464 D
10 23465 A
Index table:
OrderType FROM TO
A 1 2
B 3 3
C 4 6
B 7 8
D 9 9
A 10 10
That means you might have more entries in the index table than the cardinality. Is that right? In that case, the size of the index table cannot be determined exactly from the cardinality: in our example the cardinality is 4, while there are 6 entries in the index table.
In an extreme example, if no two adjacent records have the same OrderType, then there will be 10 records in the index table as well, as shown in the example below.
Data Table (second example):
RowId OrderNo OrderType
1 23456 A
2 23457 B
3 23458 C
4 23459 D
5 23460 A
6 23461 B
7 23462 C
8 23463 D
9 23464 A
10 23465 B
Index table (second example):
OrderType FROM TO
A 1 1
B 2 2
C 3 3
D 4 4
A 5 5
B 6 6
C 7 7
D 8 8
A 9 9
B 10 10
That means the size of the index table will be somewhere between the cardinality (at a minimum) and the table size (at a maximum).
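The FROM/TO run construction in the two examples above can be reproduced mechanically. This is an illustrative Python sketch only (runlength_segments is a made-up helper modeling the run-length idea, not Oracle's actual bitmap format):

```python
def runlength_segments(values):
    """Collapse consecutive equal values into (value, from_row, to_row) runs."""
    segments = []
    for row, value in enumerate(values, start=1):
        if segments and segments[-1][0] == value:
            v, frm, _ = segments[-1]
            segments[-1] = (v, frm, row)        # extend the current run
        else:
            segments.append((value, row, row))  # start a new run
    return segments

# First example: adjacent duplicates compress below 10 entries.
ex1 = list("AABCCCBBDA")
# Second example: no adjacent duplicates, one entry per row.
ex2 = list("ABCDABCDAB")

print(len(runlength_segments(ex1)))  # 6 entries for cardinality 4
print(len(runlength_segments(ex2)))  # 10 entries
```

Running it on the first example reproduces exactly the six index-table rows shown earlier, confirming that the entry count falls between the cardinality and the row count.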
Please let me know if I make sense.
Regards
Sankar -
Compiling a package without disturbing the load process
Hi,
I need to compile a package, with the changes, in the database without stopping the load process that is using this package. Please let me know if any one has any ideas.
Thanks
sdk11 wrote:
Hi,
I need to compile a package, with the changes, in the database without stopping the load process that is using this package. Please let me know if anyone has any ideas.
Thanks
If you mean "I need to CREATE OR REPLACE a package" while some session is still running code from that package, then sorry: no can do.
Unless you are on 11.2, in which case you could (with the necessary preparation/configuration done first) create a new version of the package in an edition other than the one the session is using. But the session will have to finish its work using the package as it currently is.