BIG Question regarding INIT loads - I have been scratching my head for the past 2 days
Hello All,
I have one big question regarding the INIT load. Let me explain my question with an example:
I have a DataSource, say 2LIS_02_ITM, for which I filled the setup tables in January 2008. The setup tables were filled with, say, 10,000 records. Since then we have been running delta loads every day (almost 30,000 records so far). Everything is going fine.
Imagine I lost a delta (the delta failed and I didn't do a repeat delta for 3 days, by which time the next delta had come). Hence I am planning to do an init load again without deleting/refilling the setup tables. Now my questions are:
1) Will I get the 10,000 records which were initially filled into the setup tables, or will I get 40,000 records (the 10,000 + the 30,000)?
2) If it brings 40,000, how is that possible, as I haven't refilled the setup tables?
3) Do I need to fill the setup tables each time before starting an init load?
Waiting for your guidance.........
Yes...to answer your question
But again, I would suggest not going down that route. Only one delta has failed, so why do you want to wipe away the timestamp of the last init just because of 1 delta failure? It is possible to correct it.
First of all, I hope you have stopped the process chain or put the delta jobs on hold.
Now, to answer your doubt about the red status: you mark the status red in each of the data targets before deleting the failed request from them (I usually do not just delete the requests from the data targets - I mark them red and then delete them, to be safe).
Do not forget to delete the subsequent requests after the failed delta (again, I always mark them red before doing so).
Regarding the actual request itself, you need not mark it red in the monitor. You have got to correct this request and make it successful - it will then automatically turn green.
If, after successfully correcting the request, it still does not turn green, you can set the QM status to green. The next delta will recognize this as a successful request and bring in the next delta (this happens in cases where you have the InfoPackage in a process chain and it shows green because subsequent processes were not completed, etc.).
Let me know in case you have any questions.
Similar Messages
-
Help! I have an iPad 4. I play Bingo Bingo, and for 5 days now it freezes after giving my tickets, when it says "loading". Can someone help get this problem resolved?
Try:
- iOS: Not responding or does not turn on
- Also try DFU mode after trying recovery mode
How to put iPod touch / iPhone into DFU mode « Karthik's scribblings
- If not successful and you can't turn the iOS device fully off, let the battery fully drain. After charging for at least an hour, try the above again.
- Try another cable
- Try on another computer
- If still not successful that usually indicates a hardware problem and an appointment at the Genius Bar of an Apple store is in order.
Apple Retail Store - Genius Bar -
Questions regarding *dump_dest parameters and fast_recovery_area
Hello,
I just installed a fresh new 11.2.0.2 Database on Solaris 10.
Everything was straightforward on the parameter side. I tried a custom install as well as the general purpose template. When installing with DBCA, I set every parameter around the DB name in lowercase.
With this, some questions are popping into my mind regarding certain parameters after installation.
First, the %dump_dest parameters contain the DB name twice in the path (ocpdb in my case):
background_dump_dest /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
user_dump_dest /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
core_dump_dest /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
Is it normal to have .../rdbms/dbname/dbname/... as the path, with the dbname doubled? Why?
Second, a question regarding the directory structure under fast_recovery_area (the new term for flash_recovery_area). The directory structure:
oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l
total 2
drwxr-x--- 2 oracle oinstall 512 2010-10-28 19:53 ocpdb
drwxr----- 5 oracle oinstall 512 2010-10-29 07:44 OCPDB
oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l ocpdb
total 9528
-rw-r----- 1 oracle oinstall 9748480 2010-10-31 21:09 control02.ctl
oracle@enalab13:/u01/app/oracle/fast_recovery_area$ ls -l OCPDB/
total 3
drwxr----- 5 oracle oinstall 512 2010-10-31 03:48 archivelog
drwxr----- 3 oracle oinstall 512 2010-10-29 07:44 autobackup
drwxr----- 3 oracle oinstall 512 2010-10-29 07:43 backupset
Why do I have a subdirectory with the dbname in uppercase AND one in lowercase? Should I specify the dbname in uppercase at database creation to have all the files under the same directory, or in lowercase? Or is this normal?
I want to know how to do it well before reinstalling a fresh database.
Thanks
Bruno
Edited by: blavoie on Oct 31, 2010 6:18 PM
Edited by: blavoie on Oct 31, 2010 6:20 PM
Hi,
I just reinstalled all from scratch, everything in lowercase as well in environment variables and dbname in dbca:
oracle@enalab13:~$ echo $ORACLE_SID
ocpdb
Fast recovery area directories; the dates prove that it's my fresh install:
oracle@enalab13:/u01/app/oracle$ ll fast_recovery_area/
total 2
drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
oracle@enalab13:/u01/app/oracle$ ll -R fast_recovery_area/
fast_recovery_area/:
total 2
drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 ocpdb
drwxr-x--- 4 oracle oinstall 512 2010-11-02 11:24 OCPDB
fast_recovery_area/ocpdb:
total 9528
-rw-r----- 1 oracle oinstall 9748480 2010-11-02 11:34 control02.ctl
fast_recovery_area/OCPDB:
total 2
drwxr-x--- 3 oracle oinstall 512 2010-11-02 11:24 archivelog
drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:06 onlinelog
fast_recovery_area/OCPDB/archivelog:
total 1
drwxr-x--- 2 oracle oinstall 512 2010-11-02 11:24 2010_11_02
fast_recovery_area/OCPDB/archivelog/2010_11_02:
total 47032
-rw-r----- 1 oracle oinstall 48123392 2010-11-02 11:24 o1_mf_1_5_6f0c9pnh_.arc
fast_recovery_area/OCPDB/onlinelog:
total 0
Some interesting output that was asked for earlier in the post:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 4
Next log sequence to archive 6
Current log sequence 6
SQL> show parameter recovery
NAME TYPE VALUE
db_recovery_file_dest string /u01/app/oracle/fast_recovery_area
db_recovery_file_dest_size big integer 4032M
recovery_parallelism integer 0
SQL> show parameter control_files
NAME TYPE VALUE
control_files string /u01/app/oracle/oradata/ocpdb/control01.ctl,
/u01/app/oracle/fast_recovery_area/ocpdb/control02.ctl
SQL> show parameter instance_name
NAME TYPE VALUE
instance_name string ocpdb
SQL> show parameter db_name
NAME TYPE VALUE
db_name string ocpdb
SQL> show parameter log_archive_dest_1
NAME TYPE VALUE
log_archive_dest_1 string
log_archive_dest_10 string
log_archive_dest_11 string
log_archive_dest_12 string
log_archive_dest_13 string
log_archive_dest_14 string
log_archive_dest_15 string
log_archive_dest_16 string
log_archive_dest_17 string
log_archive_dest_18 string
log_archive_dest_19 string
SQL> show parameter %dump_dest
NAME TYPE VALUE
background_dump_dest string /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
core_dump_dest string /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/cdump
user_dump_dest string /u01/app/oracle/diag/rdbms/ocpdb/ocpdb/trace
I think next time I'll install everything regarding the Oracle SID in uppercase...
Maybe these are details I don't need to care about... It seems that something odd is happening with the management of fast_recovery_area...
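For reference, here is a sketch (plain Python, and an assumption based on the documented 11g layout rather than anything stated in this thread) of how those paths are composed: the Automatic Diagnostic Repository nests the instance name (SID) under the database name, which is why "ocpdb" appears twice, while Oracle-managed files in the fast recovery area land under the DB_UNIQUE_NAME in uppercase. The lowercase ocpdb directory only exists because control02.ctl was given an explicit lowercase path.

```python
import os

def adr_trace_dir(oracle_base, db_unique_name, instance_name):
    # ADR home: <oracle_base>/diag/rdbms/<db_unique_name>/<instance_name>
    # db_unique_name is lowercased; instance_name is the SID as-is.
    return os.path.join(oracle_base, "diag", "rdbms",
                        db_unique_name.lower(), instance_name, "trace")

def fra_subdir(fra_root, db_unique_name, file_type):
    # Oracle-managed files in the fast recovery area go under the
    # DB_UNIQUE_NAME in UPPERCASE, e.g. .../OCPDB/archivelog
    return os.path.join(fra_root, db_unique_name.upper(), file_type)

# dbname appears twice because db name and instance name are both "ocpdb"
print(adr_trace_dir("/u01/app/oracle", "ocpdb", "ocpdb"))
print(fra_subdir("/u01/app/oracle/fast_recovery_area", "ocpdb", "archivelog"))
```

So both the dbname/dbname path and the uppercase OCPDB directory are normal, and nothing needs reinstalling.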
Thanks
Bruno -
FI_AR_4 Infosource(standard) Init Load
Hi Gurus,
I have an issue regarding the INIT load for the FI_AR_4 InfoSource. It shows yellow status even though R/3 has data. I got the records into BW using a full upload.
To enable DELTA upload, I initialized the delta "<b>without data transfer</b>". It gives me an error that I cannot initialize again after a FULL upload.
Could you please help me to solve this.
Regards,
Venkat
That is true with the GL, if you are using the GL. Once the GL is initialized, AR and AP have to follow it; if the GL is not being used, don't worry about it.
Check your ODS status (change -> ODS...). Look at the settings and make sure "update status automatically" is set; otherwise the system waits for you to turn it green manually.
/smw -
Hello. I'm new at this forum. I hope someone can help me out with the questions I have.
This year (from September 2014 until May 2015) I'll be graduating from the University of Applied Sciences in the Netherlands. I'm currently employed, during the time period mentioned, to create an Illustrator CC course.
When writing down the risks for my future project, I came to the conclusion that it is really important for me to know whether there will be any significant (big) updates to Illustrator CC in this time frame.
For example:
- Will there be any tools in the program that will not be used anymore?
- Will there be any tools that will be made (even) better?
- Will there be any extra tools added to Illustrator?
Should you have any information regarding these questions or links to a blog(s) where I can find this information your help would be very much appreciated.
I'm sure you will understand that I'd rather not make an outdated course, so I'm trying to avoid the major failure of developing a video course that uses outdated tools.
Thank you in advance.
Based on history...
- Will there be any tools in the program that will not be used anymore? - No
- Will there be any tools that will be made (even) better? - very unlikely
- Will there be any extra tools added to Illustrator? - probably, not guaranteed. -
Question regarding deltas and filling of setup tables
Hello Friends,
We are using the Sales Order Item Datasource 2LIS_11_VAITM. We have transported it to BI Production. We have initialized the delta and the deltas are running for the past 3-4 months.
Now we had a new requirement which was getting satisfied by the Sales Order Header and Sales Order Header Status DataSources, 2LIS_11_VAHDR & 2LIS_11_VASTH respectively.
Now we want to transport both these (header & header status) DataSources to BI Production. My question is:
1) Do we have to refill the setup tables again in R/3 for the Sales (11) application?
2) Do we have to reload the entire Sales data again from scratch?
3) Is there any way to transport the 2 new DataSources to BI Production without disturbing the already running deltas of the Sales Order Item DataSource?
Regards,
Prem.
Hi,
1) Do we have to refill the setup tables again in R/3 for the Sales (11) application?
Yes, you need to refill the setup tables. To load deltas you first need to do an init and then deltas. Fill the setup tables, run the init with the existing data in the setup tables, and then the delta can be set; alternatively, you can run full loads from that day up to the current date and then set up the delta.
It is best to fill the setup tables and load from them.
2) Do we have to reload the entire Sales data again from scratch?
In any case you need downtime to load the other 2 DataSources, and you are refilling the setup tables for application 11 anyway. So, to be safe, it is better to load from scratch.
3) Is there any way to transport the 2 new DataSources to BI Production without disturbing the already running deltas of the Sales Order Item DataSource?
If you just transport the new DataSources, the existing delta in BW won't be disturbed. However, if you made changes to an existing DataSource and try to transport it to production, it will give an error, because LBWQ and RSA7 will still contain data; you first need to clear them and then transport.
Thanks
Reddy -
I have some questions regarding setting up a software RAID 0 on a Mac Pro
I have some questions regarding setting up a software RAID 0 on a Mac pro (early 2009).
These questions might seem stupid to many of you, but, as my last, in fact my one and only, computer before the Mac Pro was a IICX/4/80 running System 7.5, I am a complete novice regarding this particular matter.
A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
My intention is to buy another 3 VR300s to form a 4 drive Raid 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all, but the more expensive, UK resellers.
I should be most grateful to receive advice regarding the following questions:
QUESTION 1:
I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
Or would I have to reinstall the OS, applications and software again?
I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
QUESTION 2:
I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
Even so, it would be worth the risk to gain the optimum performance provide by Raid 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
QUESTION 3:
Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
Or would this not be of any use anyway, in the event of a single drive failure?
QUESTION 4:
Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
QUESTION 5:
If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th. drive in the event of a single drive failure in the 3 drive RAID 0 array which had been selected for startup?
Apologies if these seem stupid questions, but I am trying to determine the best option without forgoing optimum performance.
Well said.
Steps to set up RAID
Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
1. Open Disk Utility (/Applications/Utilities).
2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
4. Name the RAID set.
5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
6. Click Create.
Recovering from a hard drive failure on a mirrored array
1. Open Disk Utility in (/Applications/Utilities).
2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
3. If an issue with the disk is indicated, click Rebuild.
4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
5. Repeat steps 1 and 2.
6. Drag the icon of the new disk on top of that of the removed disk.
7. Click Rebuild.
http://support.apple.com/kb/HT2559
Drive A + B = VOLUME ONE
Drive C + D = VOLUME TWO
What you put on those volumes is of course up to you and easy to do.
A system really only needs to be backed up "as needed" like before you add or update or install anything.
/Users can be backed up hourly, daily, weekly schedule
Media files as needed.
Things that hurt performance:
Page outs
Spotlight - disable this for boot drive and 'scratch'
SCRATCH: Temporary space; erased between projects and steps.
http://en.wikipedia.org/wiki/Standard_RAID_levels
(normally I'd link to Wikipedia but I can't load it right now)
Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
To help understand and configure your 2009 Nehalem Mac Pro:
http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
http://macperformanceguide.com/
http://www.macgurus.com/guides/storageaccelguide.php
http://www.macintouch.com/readerreports/harddrives/index.html
http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
http://kb2.adobe.com/cps/404/kb404440.html -
Question regarding MultiProvider
I have 2 cubes. One is a snapshot Daily Inventory cube, which gets refreshed every day with a full init load. The other is a Monthly Inventory cube, which gets updated every month end. Each of them has the same dimensions and the same key figures.
The requirement is to have a report which gives you information of today's date and past 11 months of Inventory value.
I have to build a MultiProvider on top of those 2 cubes. <b>Now my question is: do I need to assign all the dimensions from both cubes, or is one of the cubes fine?</b> My reports will have 12 columns, one for yesterday's date and 11 others for the past 11 month-end dates.
<b>Does the same go for key figures?</b> I am guessing that for the KFs I will have to assign from both, but please advise.
Points will be given generously.
Thanks
Hi Jayesh,
As you have mentioned, all the key figures and dimensions are the same, so you can keep the dimensions in the MultiProvider the same as well, and you can select characteristics and key figures from both cubes. When displaying the key figure values, use 0INFOPROV (available under the Data Package dimension) and restrict it to the relevant cube names for the relevant key figures.
Hope it helps.
Regards. -
How to see the init load in BI 7.0
Hello friends,
In BI 7.0, when we load data it shows only the full load and delta load options. My question is: if delta loads are there, then surely an init load must also be available, so where is that init load option? Please tell me where that option is.
My 2nd question is about compression in BI 3.5: after loading data into the InfoCube, when I do compression it does not work. I don't know why; please help me.
Thanks
Rosy
Hi Rosy,
Init with data transfer is a full upload + delta enablement. Once you run it, it will fetch all records from the source, delta will be automatically enabled, and from next time onward only deltas will come to BW.
Before you can request a delta update, you must first initialize the delta process. Delta initialization can only be simulated for DataSources from SAP source systems if the DataSource supports this.
I hope the below link will be helpful,
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c0db2280-8242-2e10-b6a5-86a46d0feb25?QuickLink=index&overridelayout=true
Regarding compression, please check the below link,
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c035d300-b477-2d10-0c92-f858f7f1b575?QuickLink=index&overridelayout=true
Thanks,
Vinod -
Question about Finder-Load-Beans flag
Hi all,
I've read that the Finder-Load-Beans flag could yield some valuable gains in performance
but:
1) why is it suggested to do individual gets of methods within the same Transaction? (tx-Required)
2) this strategy is useful only for small sets of data, isn't it? I imagine I would choose Finder-Load-Beans to false (or JDBC) for larger sets of data.
3) A last question: is its default value true or false?
Thanks
Francesco
Because if the get methods are called in different transactions, the state/data of the bean would most likely be reloaded from the database. A new transaction causes the ejbLoad method to be invoked at the beginning and ejbStore at the end. That is the usual case, but there are other ways to modify this behavior.
Thanks
Gaurav
"Francesco" <[email protected]> wrote in message
news:[email protected]...
>
Hi thorick,
I have found this in the newsgroup. It's from R.Woolen answering
a question about Finder-Load-Beans flag.
"Consider this case:
tx.begin();
Collection c = findAllEmployeesNamed("Rob");
Iterator it = c.iterator();
while (it.hasNext()) {
    Employee e = (Employee) it.next();
    System.out.println("Favorite color is: " + e.getFavColor());
}
tx.commit();
With CMP (and finders-load-beans set to its default true value), the findAllEmployeesNamed finder will load all the employees with the name Rob. The getFavColor methods do not hit the db because they are in the same tx, and the beans are already loaded in the cache.
It's the big CMP performance advantage."
So I wonder why this performance gain can be achieved when the iteration is inside a transaction.
Thanks
regards
Francesco
thorick <[email protected]> wrote:
1) why is it suggested to do individual gets of methods within the same Transaction? (tx-Required)
I'm not sure about the context of this question (in what document or paragraph this is mentioned).
2) this strategy is useful only for small sets of data, isn't it? I imagine I would choose Finder-Load-Beans to false (or JDBC) for larger sets of data.
>
If you know that you will be accessing the fields of all the beans that you get back from a finder, then you will realize a significant performance gain. If one selects 100s or more beans using a finder, but only accesses the fields of a few, then there may be some performance cost. It could depend on how large some of the fields are. I'd guess that the cost of 1 hit to the DB per bean vs. the cost of 1 + maybe 1 more hit to the DB per bean would usually be less. A performance test using your actual app's beans would be the only way to know for sure.
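To make that trade-off concrete, here is a language-neutral toy model (Python, with invented names, not a WebLogic API) of the behavior described above: with eager loading the finder primes a per-transaction cache, so the getters never go back to the DB; with lazy loading, each bean's first getter costs an extra DB hit.

```python
# Toy model of finders-load-beans: count simulated DB hits for eager vs
# lazy loading of 100 beans whose fields are all subsequently read.
class ToyEntityCache:
    def __init__(self, db_rows):
        self.db = db_rows          # simulated table: pk -> row dict
        self.cache = {}            # per-transaction bean cache
        self.db_hits = 0

    def find_all(self, load_beans):
        pks = list(self.db)
        self.db_hits += 1          # one query for the finder itself
        if load_beans:             # finders-load-beans=true: the rows come
            for pk in pks:         # back with the finder, priming the cache
                self.cache[pk] = dict(self.db[pk])
        return pks

    def get_field(self, pk, field):
        if pk not in self.cache:   # lazy: first access triggers an ejbLoad
            self.db_hits += 1
            self.cache[pk] = dict(self.db[pk])
        return self.cache[pk][field]

rows = {i: {"name": "Rob", "fav_color": "blue"} for i in range(100)}

eager = ToyEntityCache(rows)
for pk in eager.find_all(load_beans=True):
    eager.get_field(pk, "fav_color")
print(eager.db_hits)   # 1: the finder query loaded everything

lazy = ToyEntityCache(rows)
for pk in lazy.find_all(load_beans=False):
    lazy.get_field(pk, "fav_color")
print(lazy.db_hits)    # 101: one finder query + one load per bean
```

This is why the gain only materializes when the gets happen inside the same transaction: the cache lives for the transaction, and a new transaction would reload the bean anyway.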
3) A last question: is its default value true or false?
The default is 'true'.
-thorick -
Difference: Full load, Delta Load & INIT load in BW
Hi Experts,
I am new to SAP BW Data warehousing.
What is the difference between Full load, Delta Load & INIT load in BW.
Regards
Praveen
Hi Praveen,
Hope the below helps you...
Full update:
A full update requests all the data that corresponds with the selection criteria you have determined in the scheduler.
You can indicate that a request with full update mode is a full repair request using the scheduler menu. This request can be posted to every data target, even if the data target already contains data from an initialization run or delta for this DataSource/source system combination, and has overlapping selection conditions.
If you use the full repair request to reload the data into the DataStore object after you have selectively deleted data from the DataStore object, note that this can lead to inconsistencies in the data target. If the selections for the repair request do not correspond with the selections you made when selectively deleting data from the DataStore object, posting this request can lead to duplicated data records in the data target.
Initialization of the delta process
Initializing the delta process is a precondition for requesting the delta.
More than one initialization selection is possible for different, non-overlapping selection conditions to initialize the delta process when scheduling InfoPackages for data from an SAP system. This gives you the option of loading the data
relevant for the delta process to the Business Information Warehouse in steps.
For example, you could load the data for cost center 1000 to BI in one step and the data for cost center 2000 in another.
The delta requested after several initializations contains the superset of all the successful init selections as its selection criterion. After this, the selection condition for the delta can no longer be changed. In the example, data for cost centers 1000 and 2000 would then be loaded to BI during a delta.
Delta update
A delta update requests only the data that has appeared since the last delta. Before you can request a delta update, you first have to initialize the delta process. A delta update is only possible when loading from SAP source systems.
If a delta update fails (status red in the monitor), or the overall status of the delta request was set to red manually, the next data request is carried out in repeat mode. With a repeat request, the data that was loaded incorrectly or incompletely in the failed delta request is extracted along with the data that has accrued since then. A repeat can only be requested in the dialog screen.
If the data from the failed delta request has already been updated to the data targets, delete that data from the data targets in question. If you do not delete it, this could lead to duplicated data records after the repeat request.
Repeat delta update
If the loading process fails, the delta for the DataSource is requested again.
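The full / init / delta / repeat behavior described above can be sketched as a toy model (plain Python, purely illustrative; the real delta queue in RSA7 works differently in detail):

```python
# Toy model of BW update modes: a source keeps appending records; init
# marks a starting point, delta returns only what arrived since the last
# request, and a repeat re-sends the failed delta plus anything newer.
class ToySource:
    def __init__(self):
        self.records = []
        self.sent = 0            # pointer: confirmed-delivered records
        self.pending = None      # start of the last delta, kept until confirmed

    def post(self, *recs):
        self.records.extend(recs)

    def full(self):
        return list(self.records)        # everything; pointer unchanged

    def init_delta(self):
        self.sent = len(self.records)    # deltas start after this point
        return list(self.records)        # init with data transfer

    def delta(self, last_failed=False):
        if last_failed and self.pending is not None:
            start = self.pending         # repeat: resend the failed delta
        else:
            start = self.sent
        batch = self.records[start:]
        self.pending, self.sent = start, len(self.records)
        return batch

src = ToySource()
src.post("r1", "r2")
assert src.init_delta() == ["r1", "r2"]
src.post("r3")
assert src.delta() == ["r3"]                         # normal delta
src.post("r4")
assert src.delta(last_failed=True) == ["r3", "r4"]   # repeat resends r3
```

The last line is the repeat-mode point made above: the failed delta's records come again, which is why they must first be deleted from the data targets to avoid duplicates.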
Early delta initialization: this is an advanced concept, but just for your information...
With early delta initialization you can already write the data to the delta queue or to the delta tables in the application while the initialization is being requested in the source system. This enables you to initialize the delta process, in other words the init request, without having to stop posting data in the source system. You can only carry out early delta initialization if this is supported by the extractor for the DataSource that is called in the source system with this data request. Extractors that support early delta initialization were first supplied with. -
Init load very long runtime 0CRM_COMPLAINTS SALES Analyses
Hi,
We have launched an init load (InfoPackage 0CRM_COMPLAINTS) (BI 7.0) which extracts 180,201 claims from our CRM 2007 system.
The first packet (data package size 5,000 records) was extracted in 5 minutes, but once it reaches the 10th packet, it takes 20 to 30 minutes between each of them.
The whole run lasted 12 hours (we don't see an optimization point at the moment).
My first thought is to decrease the packet size, but I can't find a specific note about this problem, or any other lead.
Any help would be greatly appreciated.
Chris
Hi,
Check SAP Note 692195, Question 1 (shown below):
Question 1: The extraction from CRM to BW takes a very long time. What can be done? (Performance issues)
Suggestion 1: Please implement notes 653645 (collective note) and 639072 (parallel processing).
The performance could be slow because of the wrong control parameters used for packaging.
You can change the package size for the data extraction.
Also note that changing the package size in transaction SBIW would imply a change for all the extractors. Instead, you could follow this path in the BW system:
InfoPackage (scheduler) > Menu 'Scheduler' > 'DataS. default data transfer' > maintain the value as 1500 or 1000 (this value is variable).
The package size depends on the resources available on the customer side (the number of parallel processes that can be assigned is roughly 1.5 times the number of CPUs available).
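As a quick back-of-the-envelope check (the 1.5x factor is the note's rule of thumb quoted above; the record count is this thread's load, and the CPU count is just an example):

```python
import math

def suggested_parallel_processes(cpus):
    # rule of thumb from the note: roughly 1.5 times the available CPUs
    return int(cpus * 1.5)

def number_of_packages(total_records, package_size):
    return math.ceil(total_records / package_size)

print(suggested_parallel_processes(8))      # 12 processes for an 8-CPU box
print(number_of_packages(180201, 5000))     # 37 packages at the current size
print(number_of_packages(180201, 1500))     # 121 packages at the suggested size
```

So a smaller package size means more, cheaper packages; whether that is faster overall depends on how many of them can run in parallel.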
Regards,
Sathya -
Hello Experts
I am trying to load data from ODS to a Cube and have the following questions regarding DTP behaviour.
1) I have set the extraction mode of the DTP to delta, as I understand that it behaves like an init with data transfer the first time. However, it fetches the records only from the change log of the ODS. Then what about all the records that are in the active table? If it cannot fetch all the records from the active table, then how can we term it an init with data transfer?
2) Do I need to have two separate DTPs - one for a full load to fetch all the data from the active table, and another to fetch deltas from the change log?
Thanks,
Rishi
1. When you choose Delta as the extraction mode, you get the data only from the change log table.
The change log table contains all the records.
Suppose you run a load to the DSO which contains 10 records and activate it. Those 10 records are now available in the active table as well as in the change log.
Now, in the second load you have 1 new record and 1 changed record. When you activate, your active table will have 11 records. The change log will have before and after image records for the changed record, along with the new record.
The cube needs those images so that the data doesn't get mismatched with the old data.
2. If you run a full load to the cube from the DSO, you need to delete the old request after the load, which is not necessary in the delta case.
In BI 7.0, when you choose full load as the extraction mode, you have the flexibility to load the data from either the active table or the change log table.
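A minimal sketch (plain Python, not SAP code) of why the cube needs those images: the before image carries the old key-figure value with a reversed sign, so that adding the whole delta to an additive cube yields the net change instead of double counting.

```python
def change_log_images(old_row, new_row):
    # Before image: old value with reversed sign (0RECORDMODE 'X');
    # after image: the new value (0RECORDMODE ' ').
    before = {**old_row, "amount": -old_row["amount"], "recordmode": "X"}
    after = {**new_row, "recordmode": ""}
    return [before, after]

old = {"order": "4711", "amount": 100}
new = {"order": "4711", "amount": 120}
images = change_log_images(old, new)

# Net effect on the cube: -100 + 120 = +20, the true change
assert sum(img["amount"] for img in images) == 20
```

The active table only holds the final value (120), which is why a delta DTP reads the change log rather than the active table.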
Thanks
Sreekanth S -
Question Regarding MIDI and Sample Accuracy
Hi,
I have 2 questions regarding MIDI.
1. MIDI is moved by ticks. In the arrange window however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see on your position box that pops up) Now, will this MIDI note actually be played back at that specific sample point, or will it round the event to the closest tick? (example, if I have a MIDI note directly on 1.1.1.1, and I move the REGION in the arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
2. When making a midi template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
I've looked through the manual, and couldn't find any specific answer to these questions.
Thanks!
Message was edited by: Matthew Usnick
OK, I've done some experimenting, and here are my results.
I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window, and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1-sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made the start point the region start position). I saved the region as a new audio file, and loaded it up in the EXS sampler.
I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick at 120 BPM, apparently). After bouncing all of these (the cycle position remained fixed; only the MIDI region was moving), I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
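For what it's worth, the "22 samples per tick" figure checks out arithmetically (assuming a 44.1 kHz session and 960 ticks per quarter note, Logic's internal resolution; both are assumptions on my part, not stated in the post):

```python
def samples_per_tick(sample_rate, bpm, ppq=960):
    # one quarter note lasts 60/bpm seconds and spans ppq ticks
    seconds_per_quarter = 60.0 / bpm
    return sample_rate * seconds_per_quarter / ppq

print(samples_per_tick(44100, 120))   # 22.96875, i.e. about 22-23 samples
```

So moving a region 22 times by one sample stays within a single tick, which is exactly what the experiment probed.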
SO.
Not only can you move MIDI regions by samples, but the positions are NOT quantized to Logic's ticks!
This is very good news, and glad I worked this out!
(if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
Message was edited by: Matthew Usnick -
Questions on init.ora file
Hi,
I have some questions about the init.ora file. While checking the init file on my system, I found that it defines an spfile in a non-default location.
The parameter names are like
1) db1.__db_cache_size
2) *._kgl_large_heap_warning_threshold
3) *.sga_target
What do they indicate? I mean, what do 'db1.__', '*._' and '*.' indicate? There are multiple databases on this Windows 2003 server, and the DB version is 10g R1.
Regards,
SID
Edited by: SID3 on Jun 29, 2010 5:55 AM
Edited by: SID3 on Jun 29, 2010 5:56 AM
SID3 wrote:
From the discussion I guess the following points:
1) db1.__ means they are specific to a database.
2) *._ and *. mean they are used across databases, and changing them in any of the init files might affect all? I seriously doubt this.
You are mistaken in speaking about databases. Bear in mind, a database is not an instance.
* here means the parameter will apply to any instance of that database (if it is RAC), or to the only instance of that database (if it is non-RAC).
Find out more:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_2013.htm#i2146449
Nicolas.