Driver dimension and data load dimension

Hello,
Can anyone explain the difference between a driver dimension and a normal (data load) dimension?
Thank You!!!

Hi,
Driver dimensions are the varying column dimensions that reside in the data table or file (i.e. columns such as Customer, Product, Date, etc.). Single default members from the other dimensions are fixed in the POV during the data load (e.g. the Actual scenario is usually fixed when loading actual data).
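For illustration, here is a minimal sketch of that split (the dimension and member names are just assumptions, not taken from any particular application): the driver dimensions vary per record and come from the file columns, while everything else comes from the fixed POV.

    # Members fixed in the POV for this load (hypothetical names)
    fixed_pov = {"Scenario": "Actual", "Version": "Final", "Currency": "USD"}

    # Driver dimensions: these vary per record and are read from the file columns
    driver_columns = ["Customer", "Product", "Date", "Amount"]

    def build_cell(row):
        """Combine one file row with the fixed POV into a full intersection."""
        cell = dict(fixed_pov)                    # fixed members
        cell.update(zip(driver_columns, row))     # varying (driver) members plus the value
        return cell

    print(build_cell(["Cust_100", "Prod_A", "2024-01-31", 1250.0]))
    # {'Scenario': 'Actual', 'Version': 'Final', 'Currency': 'USD',
    #  'Customer': 'Cust_100', 'Product': 'Prod_A', 'Date': '2024-01-31', 'Amount': 1250.0}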
Cheers,
Alp

Similar Messages

  • How can I add dimensions and data loads to Planning applications?

    Now please let me know how I can add dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM, or HAL to load metadata and data into Planning applications.
    The data load can also be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through one of the tools mentioned above. There are other ways to achieve the same result as well.
    - Krish

  • Automate EIS dim builds and data loads

    I want to automate the dimension builds and data loads from my ETL tool (DTS). I have not been able to find anything about scripting EIS automation in the documentation. Is there any?

    What you can do is go into the EIS metaoutline and create a member load script and a data load script. Do this by selecting the Outline menu item, then selecting Member Load. Click Next; on that screen, select only "Save load script", click the "Save scripts" button to give it a name, and click Finish. Repeat for the data load script. (If you are using ASO cubes, you must use separate scripts; you can't do both in one script.) Then create a batch file to run the member load and data load, and in DTS use an Execute Process task to run the batch file.
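    As a rough sketch of the automation wrapper (the batch file names and paths below are hypothetical; each batch file is assumed to run one of the saved EIS load scripts as described above), the step that a DTS Execute Process task calls could look something like this:

        # Rough sketch: run the saved EIS member load and data load in sequence.
        # The batch file names are hypothetical placeholders.
        import subprocess
        import sys

        STEPS = [
            r"C:\eis\run_member_load.bat",   # runs the saved member load script
            r"C:\eis\run_data_load.bat",     # runs the saved data load script
        ]

        for step in STEPS:
            print(f"Running {step} ...")
            result = subprocess.run(step, shell=True)
            if result.returncode != 0:
                # Stop the chain so a failed member load is not followed by a data load
                sys.exit(f"{step} failed with return code {result.returncode}")

        print("EIS member load and data load completed.")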

  • EIS Member and Data Load-Getting OS Error-Please help!

    Please help! I have created an OLAP model and then created a metaoutline.
    Then I went ahead with the member and data load: I logged into my server and started the member and data load.
    Then it gives me the following error:
    SELECT /*+ */ .. FROM <my_view_name>
    OS Error No such file or directory IS Error Member load terminated with error.
    The load terminated with errors.
    Thanks in advance for any replies.

    Thanks all! The error has been resolved.
    I just had to create a directory in the Integration Services folder: $ISHOME/loadinfo.
    The loadinfo folder was missing.
    Prathap,
    Is that view available at that time? (The query is generated automatically.)
    Which data source and which version of Hyperion? The data source is Oracle 10g and the Hyperion version is 9.3.

  • How do I move my current hard drive OS and data to a new drive for internal installation?

    I want to replace my Macbook Pro 500GB hard drive with a 2TB hard drive. How do I make this new drive bootable and a replica of my current drive's programs and data? I also want to upgrade the OS. Is it best to mirror the drive and then upgrade the OS, or can I put the new OS on the new drive and then clone the old drive onto it?

    For starters I would do this one step at a time. It is far easier to check out where the problems are when they develop that way.
    My first step would be to back up everything that you have now, just in case of problems later.
    Next, I would suggest placing the new 2TB disk drive in an enclosure and formatting it to Mac OS Extended with a GUID partition map first.
    Once that is done you can create a bootable clone on the 2TB disk drive using either CCC or SuperDuper.
    Then test to be sure the 2TB is bootable.
    Then move the 2TB disk drive into the Mac.
    Once you have done that and it boots, then you can upgrade the operating system.

  • Training on Calc Scripts, Reporting Scripts, MaxL and Data Loading

    Hi All
    I am new to this forum. I am looking for someone who can train me on topics like Calc Scripts, Reporting Scripts, MaxL and data loading.
    I am willing to pay for your time. Please let me know.
    Thanks

    Hi Friend,
    As you seem to be new to Essbase, you should first learn what Essbase and OLAP are and the difference between dense and sparse, and then use the Essbase Tech Ref for further reference.
    After that, go through
    https://blogs.oracle.com/HyperionPlanning/ and start exploring Calc Scripts, MaxL, etc.
    And all of this is free.
    Thanks,
    Avneet

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the relevant transaction codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel (see the sketch after this list). This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
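    To make the split-and-parallel idea in point 8 concrete, here is a generic sketch (plain Python, not SAP-specific; the selection portions and the load function are hypothetical stand-ins for scheduling one InfoPackage per selection range):

        # Generic sketch of point 8: split the selection into portions and load
        # them in parallel. load_portion() stands in for scheduling one
        # InfoPackage restricted to a single selection range.
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical selection portions, e.g. one fiscal period each
        portions = ["2007001", "2007002", "2007003", "2007004"]

        def load_portion(fiscal_period):
            # Placeholder for "run one InfoPackage for this period"
            print(f"Loading fiscal period {fiscal_period} ...")
            return f"{fiscal_period}: ok"

        # Keep the worker count modest so a constrained server is not overwhelmed
        with ThreadPoolExecutor(max_workers=2) as pool:
            for outcome in pool.map(load_portion, portions):
                print(outcome)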
    Hope it Helps
    Chetan
    @CP..

  • Hard Drive Failure and Data Recovery

    I am pretty sure I am SOL - but thought I'd try one last "Hail Mary" before throwing in the towel - to see if anyone here might have a solution I haven't thought of or am not aware of:
    My wife's Macbook wouldn't boot today - instead she was greeted by the dreaded blinking question mark.
    Macbook (2006) 4 gb ram - OSX 10.6.6 120GB internal HD (the internal HD is the problem).
    I shut it down and attached a USB external HD that has a copy of OSX on it and it booted fine - but the internal HD would not mount. I tried both Drive Genius and Disk Warrior - neither program sees the damaged hard drive. I then launched Disk Utility - it sees the damaged HD as an UNINITIALIZED  disk - and the only options are to initialize it - all the repair options are grayed out.
    So I went poking around on the net looking for hard drive problems that the leading disk recovery programs cannot fix and stumbled onto a thread discussing invalid sibling links - which is apparently the Achilles heel of the OSX HFS file system - and can render the data on a HD inaccessible.
    The thread went on to discuss a possible fix by running a series of fsck_hfs commands from the terminal window - or if the HD will mount - via single user mode or verbose login. Other options included attaching the problem Mac to another Mac via Target Disk Mode and many had success being able to see and repair the drive that way.
    If anyone is interested in the details of this fix you can sift through this thread on Mac OSX Hints:
    http://hints.macworld.com/article.php?story=20070204093925888
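    (For reference, the kind of invocation that thread discusses looks roughly like the following; the device identifier below is a hypothetical placeholder for the damaged volume, and it has to be run with root privileges.)

        # Rough sketch: force-check and repair an HFS volume with fsck_hfs.
        # /dev/disk2s2 is a hypothetical device identifier; -f forces the check
        # and -y answers "yes" to any repair prompts.
        import subprocess

        subprocess.run(["fsck_hfs", "-fy", "/dev/disk2s2"], check=False)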
    Unfortunately for me - nothing is working. The only way I can get Disk Utility to see the damaged drive is if I boot from the external USB HD. And as I mentioned above, Disk Utility only sees it as an uninitialized drive.
    If I attach the Macbook to my MacPro in Target Disk Mode - the Macbook starts up in TDM but of course the HD never mounts on my MacPro, and neither Disk Utility, Drive Genius nor Disk Warrior is able to see the damaged Macbook drive - in TDM none of these programs even shows that the physical drive exists.
    So, short of re-initializing the drive and thus losing all the data - anyone out there in cyberspace have any suggestions?
    Thanks!

    Just an update and a new question - I decided to go ahead and reformat the drive - but Disk Utility is giving me the following error message:
    Disk Erase Failed with Error:
    POSIX reports: The operation couldn't be completed.
    Cannot allocate memory.
    So, does this sound like a hardware problem as opposed to just a corrupted file system?
    Thanks!

  • What are the CRM Transaction Data DataSources and Data Load Procedure

    Hi BI Gurus,
    Could anybody provide the CRM transaction data DataSource names and the load procedure into BI 7.0?
    I know the master data load procedure from CRM to BI 7.0.
    Step-by-step documents would be even more helpful.
    Thanks in Advance,
    Venkat

    Hi Venkat
    In order to find the DataSources you want, you can log in to the CRM system and then use transaction RSA6. There you can expand all the subtrees by clicking on the first line and then clicking the expand button further to the left. After that you can easily search for any DataSource you may want.
    Hope that helps
    Rgds
    John

  • Can I sync from iPad to a Mac as hard drive crashed and data lost

    My iMac hard drive crashed and the data cannot be recovered. Can I sync the iTunes data on my iPad or iPhone back onto the new hard drive?

    You should be able to recover data that is in the backup. That does not include some items like music. See the following for what is in the backup:
    http://support.apple.com/kb/HT4079
    First in iTunes preferences check the box for prevent automatic syncing when a device is connected.
    Then connect your iPad to the computer and transfer iTunes purchases to the computer. See:
    http://support.apple.com/kb/HT1848
    Then right click on the iPad listed under Devices on the left hand side and click on "Back Up". After the backup is complete, I think you will have to restore the iPad from backup to complete the process of syncing your iPad with the "new" computer.

  • Relationship between ERPi metadata rules and Data load rules

    I am using version 11.1.1.3 to load EBS data to ERPi and then to FDM. I have created a Metadata rule where I have assigned a ledger from the Source Accounting Entities. I have also created a Data Load rule that references the same ledger as the metadata rule. I have about 50 ledgers that I have to integrate so I have added the source adapters that reference the Data Load rule.
    My question is... What is the relationship between the metadata rule and the Data Load rule with respect to the ledger? If you can specify only one ledger per metadata rule, then how does FDM know to use another metadata rule with another ledger attached to it? Thanks!!!


  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on that topics:
    1. Transfer rules activation. I just finished transporting my cubes, ETL, etc. to the production system and started filling cubes with data. Very often during data load it turns out that transfer rules need to be activated, even though I transported them in an active state and (I think) did not change anything after the transport. Then I again have to create transfer rule transports on dev, transport the changes to prod, and execute the data load again.
    It is very annoying. What do you suggest I do about this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system, and it was about 0.5h for 50,000 records; but when I executed the load on production it was 2h for 200,000 records, so it seemed twice as slow as dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure (although in general, as you know, activation of the transfer structure should happen automatically after transport of the object).
    2. One factor in the difference in time is environmental: as you know, in the production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case, though, both systems are performing equally: dev did 50,000 records in half an hour and production did 200,000 records in 2 hours, which is about 100,000 records per hour in both cases; there are simply more records in production, so the load took proportionally longer. If it is really causing a problem then you have to do some performance tuning.
    Hope this helps
    Thanks
    Sat

  • Selective Deletion and Data Load from Setup Tables for LIS Extractor

    Hi All,
    We came across a situation where one of the deltas in the PSA was missed and not loaded into the DSO. This DSO updates another cube in the flow. Since many days have passed since this miss came to our attention, we need to selectively delete the data for those documents from the DSO and cube and then take a full load for those documents after filling the setup table.
    Now, what will be the right approach to load this data from the setup table > DSO > cube? There is a change log present for those documents, and a few KPIs in the DSO are in summation mode.
    Regards
    Jitendra

    Thanks Ajeet!
    This is the Sales Order extractor. The data got loaded into the ODS just fine, but the data is coming into the ODS from a different extractor. Everything is fine in the ODS, but not in the cube. Would a full repair request versus a full load make a difference when the data is going to the cube? I thought it would matter only if I were loading to the ODS.
    What do you mean by "Even if you do a full load without any selections you should do a full repair"?
    Thanks.
    W

  • Query has to display completed quarters and Data loaded month.

    Hi All,
    I have  2 issues in  BEX .
    1. I need to display the data-loaded month (0CALMONTH):
    Ex: Jan 2007 data was loaded in May 2007; my query has to display 3 months: Jan 2007, Dec 2006, Nov 2006.
    2. Completed quarters: if we are in the middle of the 3rd quarter, the query has to display quarter 1 and quarter 2 (completed), and nothing for quarter 3.
    Thanks in advance for your inputs.

    1. You mentioned the data-loaded month, but you are pointing at the calendar month the data belongs to. Do you want the display based on the calendar month the data belongs to (Jan 2007), the loaded month (May 2007), or the last loaded month?
    2. Create a user exit variable on the Quarter or Month InfoObject. In the user exit code, check which quarter the current month belongs to; if it is the middle of the 3rd quarter, pass quarters 1 and 2 (or months 1 to 6), as in the sketch below.
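    The quarter logic itself, independent of the ABAP user exit where it would actually live, can be sketched roughly like this (the function name and dates are illustrative only):

        # Sketch of the "completed quarters only" logic: return the quarters
        # strictly before the quarter the given date falls in.
        import datetime

        def completed_quarters(today=None):
            today = today or datetime.date.today()
            current_quarter = (today.month - 1) // 3 + 1   # 1..4
            return list(range(1, current_quarter))

        # Mid 3rd quarter (e.g. 15 Aug): only Q1 and Q2 are returned
        print(completed_quarters(datetime.date(2007, 8, 15)))   # -> [1, 2]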
    hope this helps.
    Nagesh Ganisetti.

  • BPC and Data Loads

    Hi all,
    I'm sure I've read an article showing how to automate (transactional) data loads from BW into BPC. Does anyone know where I can find the document?
    I would also like to know what the function is called; I was sure it was load_infprovider.

    All of the BPC related How-to Guides are here:
    https://wiki.sdn.sap.com/wiki/display/BPX/Enterprise+Performance+Management+%28EPM%29+How-to+Guides
    You can read about the standard Data Manager package to [import transaction data from an InfoProvider|http://help.sap.com/saphelp_bpc70sp02/helpdata/en/af/36a94e9af0436c959b16baabb1a248/content.htm] in the SAP help documentation at
    http://help.sap.com
    SAP Business User > EPM Solutions > Business Planning and Consolidation
    [Jeffrey Holdeman|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/profile/jeffrey+holdeman]
    SAP BusinessObjects
    Enterprise Performance Management
    Regional Implementation Group
