Training on CalcScripts, Reporting Scripts, MaxL and Data Loading

Hi All
I am new to this forum. I am looking for someone who can train me on topics like CalcScripts, Reporting Scripts, MaxL and Data Loading.
I am willing to pay for your time. Please let me know.
Thanks

Hi Friend,
As you seem to be new to Essbase, you should first learn what Essbase and OLAP are and the difference between dense and sparse dimensions, and then use the Essbase Technical Reference ("essbase tech ref") for further details.
After that, go through
https://blogs.oracle.com/HyperionPlanning/ and start exploring CalcScripts, MaxL, etc.
And all of this is free of charge.
Thanks,
Avneet

Similar Messages

  • Report Preview Tab and data loading

    I have 2 questions:
    When I first started developing a report, when I opened the file I had both the design and preview tabs displayed. It seems that after I downloaded and viewed one of the sample reports, the preview tab disappeared (though it's not obvious the two are related). I checked the box under Options/Layout/Display Preview Panel, but it did not seem to help. Is there something else I missed?
    The other question, and this may have an effect on my 1st question, is that when I open my report file, the database records are not automatically loaded (thus, nothing to preview at that point). Is there a way to set an automatic log on to the database so that the load time happens at the beginning and not when I actually want to preview results?
    Thanks in advance....Steph

    Don,
    Thanks for the response. I tried what you suggested and it does not seem to work all the time. There may be something else that I need to consider.
    I was wondering...when I do the "Save Data with Report", does it save ONLY the data? Or does it save some intelligent connection back to the database?  Also, is this function the same as the one listed under Options/Reporting? I selected the one on the File menu (as you suggest), but it doesn't seem to change the selection on the Options/Reporting window.
    Apologies for my attempt at a "2-fer-1". I only did that because I believed the 2 might be related...maybe not. I'll repost as a separate issue/concern.
      Cheers...Steph

  • EIS Member and Data Load-Getting OS Error-Please help!

    Please help! I have created an OLAP model and then created a metaoutline.
    Then I went ahead to do the member and data load. I logged into my server and started the member and data load.
    Then it gave me the following error:
    SELECT /*+ */ .. FROM <my_view_name>
    OS Error No such file or directory IS Error Member load terminated with error.
    The load terminated with errors.
    Thanks in advance for any replies.

    Thanks all! The error has been resolved.
    I just had to create a directory in the Integration Services folder: $ISHOME/loadinfo
    The loadinfo folder was missing.
    Prathap,
    Is that view available at that time? -- The query is generated automatically.
    Which data source and which version of Hyperion? -- The data source is Oracle 10g and the Hyperion version is 9.3.

  • How can I add dimensions and data loading into Planning applications?

    Now please let me know how I can add dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM, or HAL to load metadata and data into Planning applications.
    The data load can also be done at the Essbase end using a rules file (see the sketch below), but metadata changes should flow from Planning to Essbase through one of the tools mentioned above; there are also other ways to achieve the same thing.
    - Krish
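    As a minimal illustration of the Essbase-side load Krish mentions, the sketch below shells out to the MaxL interpreter (essmsh) to import a data file using a server rules file. The application, database, file names, host, and credentials are hypothetical placeholders, and the essmsh location and login details will differ per installation.

      import subprocess
      import tempfile

      # Hypothetical names: adjust the app/db, data file, rules file, host and
      # credentials for your environment before running this.
      MAXL_SCRIPT = """
      login 'admin' 'password' on 'essbase-host';
      import database 'Sample'.'Basic' data
          from data_file '/data/actuals.txt'
          using server rules_file 'ldActual'
          on error write to '/data/actuals_errors.log';
      logout;
      """

      def load_data_with_rules_file() -> None:
          # Write the MaxL statements to a temporary script and hand it to essmsh,
          # which accepts a script file name as its argument.
          with tempfile.NamedTemporaryFile("w", suffix=".msh", delete=False) as f:
              f.write(MAXL_SCRIPT)
              script_path = f.name
          result = subprocess.run(["essmsh", script_path], capture_output=True, text=True)
          print(result.stdout)
          if result.returncode != 0:
              raise SystemExit("MaxL data load failed:\n" + result.stderr)

      if __name__ == "__main__":
          load_data_with_rules_file()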

  • Automate EIS dim builds and data loads

    I want to automate the dimension builds and data loads from my ETL tool (DTS). I have not been able to find anything about scripting EIS automation in the documentation. Is there any?

    What you can do is go into the EIS metaoutline and create a member load script and a data load script. Do this by selecting the Outline menu item, then Member Load. Click Next; on that screen, select only "Save load script" and click the "Save scripts" button to give it a name, then click Finish. Repeat for the data load script. (If you are using ASO cubes, you must use separate scripts; you can't do both in one script.) Then create a batch file to run the member load and data load, and in DTS use an Execute Process task to run the batch file (see the sketch below).
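    To make the batch step above concrete, here is one possible wrapper, sketched in Python rather than a .bat file, that a DTS Execute Process task could call. The olapicmd path, the command-file switch, and the file locations below are assumptions to verify against the Integration Services documentation for your release; the command file itself would contain the login and load statements that drive the saved member and data load scripts.

      import subprocess
      import sys

      # Hypothetical paths: olapicmd is the EIS command-line shell; the command file
      # holds the statements (login, member load, data load) prepared per the EIS docs.
      OLAPICMD = r"C:\Hyperion\EIS\bin\olapicmd.exe"
      COMMAND_FILE = r"C:\automation\eis_loads.txt"

      def main() -> int:
          # The "-f<file>" switch for running a command file is an assumption;
          # confirm the exact syntax with the olapicmd help or the EIS docs.
          result = subprocess.run([OLAPICMD, "-f" + COMMAND_FILE],
                                  capture_output=True, text=True)
          sys.stdout.write(result.stdout)
          if result.returncode != 0:
              sys.stderr.write(result.stderr)
          # DTS can be set to fail the package when this process exits nonzero.
          return result.returncode

      if __name__ == "__main__":
          raise SystemExit(main())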

  • Chart report Condition fields and Data fields

    Hi all,
    I have tried a chart report by adding two condition fields and one data field; the report is more meaningful in this scenario. The first condition field is taken as the x-axis, and the 2nd condition field is taken as the legend.
    While adding more condition fields and data fields, I feel it's not showing meaningful data.
    Can anyone explain how the condition fields and data fields are manipulated by Crystal Reports?
    i am using CR XI R2 Server.
    Thanks
    Padmanaban V

    I am using Crystal Reports XI R2 RAS embedded in my server.
    Since we can add any number of condition fields programmatically using the method
    ConditionField.Add(FieldObj), I would like to know how these fields are manipulated internally by the RAS server.
    That is, what is the significance of condition field object 1, condition field object 2, condition field object 3, etc.?
    If I add more than two condition fields, the RAS chart report always returns 0 as the legend value for all legends.
    Thanks in advance
    Regards,
    Padmanaban V

  • Regarding reporte execution time and date

    Hi all,
    Is there any table in SAP which can give me the date and time of the last execution of a report?
    Thanks and regards.
    Punit

    Hi
    I don't know exactly, but I can provide you with a list of table names that BASIS people generally use:
    http://www.saptechies.com/list-of-important-transaction-codes-tables-programs-reports/
    Check out this table, I hope it helps you:
    DBSTATC
    Regards
    Pavan

  • Refresh Webi reports automatically after BI data loads ?

    Hello,
    We are about to install BOE X.1 on top of BI 701.
    The idea would be that the users perform their daily WebI reporting, but obviously only after the SAP BI night batch scheduling is finished and the BI InfoProviders are filled.
    I've read that BOE offers you the ability to refresh the reports up front.
    We were wondering if there is a way to easily implement this logical dependency: to refresh the WebI report only after the end of the BI data loads.
    There is, of course, the possibility of using an external scheduler, but we have checked the licensing and it's quite expensive.
    Is there another way to do so?
    Many thanks for your attention.
    Best Regards.
    Raoul

    Hi Alan,
    Thank you very much for your quick answer.
    I would like to make sure that I understand you, since I'm not very familiar with BOE:
    First, we create an event in the CMC and connect it to a file location.
    Then we schedule the document and add the file event: do you mean schedule the WebI report?
    Finally, we create the file as part of the BEx query refresh process: how exactly do we do that, in the BI process chains?
    Thank you very much in advance for your help.
    Best Regards.
    Raoul
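    For Raoul's last question, creating the file from the BW side, one common approach is to add a final step to the load process chain (for example an OS command step or a small external script) that writes the file the CMC file-based event is watching. Below is a minimal sketch, assuming the event points at a shared path visible to both systems; the path and file name are hypothetical.

      from datetime import datetime
      from pathlib import Path

      # Hypothetical location: must match the file the CMC file-based event monitors,
      # on a share that both the BW host and the BOE Event Server can reach.
      TRIGGER_FILE = Path("//shared/boe_events/bi_load_complete.txt")

      def signal_load_complete() -> None:
          # A file-based event fires on the file's appearance, so the content does not
          # matter; a timestamp simply makes troubleshooting easier.
          TRIGGER_FILE.write_text("BI load finished at " + datetime.now().isoformat())

      if __name__ == "__main__":
          signal_load_complete()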

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines (see the sketch after this list). Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables once and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
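    As a language-neutral illustration of tip 11, the sketch below contrasts a per-record select inside a loop with a single bulk read cached in memory. The real transfer and update routines are ABAP (internal tables, SELECT ... FOR ALL ENTRIES), so treat this Python version only as a picture of the pattern; the table and field names are hypothetical.

      import sqlite3

      def enrich_slow(conn, records):
          # Anti-pattern: one SELECT per record inside the loop ("single selects").
          for rec in records:
              row = conn.execute(
                  "SELECT matkl FROM mara WHERE matnr = ?", (rec["matnr"],)
              ).fetchone()
              rec["matkl"] = row[0] if row else None
          return records

      def enrich_fast(conn, records):
          # Better: one bulk read into memory, then pure in-memory lookups
          # (the "buffer / array operation" idea from tip 11).
          keys = sorted({rec["matnr"] for rec in records})
          placeholders = ",".join("?" * len(keys))
          lookup = dict(
              conn.execute(
                  "SELECT matnr, matkl FROM mara WHERE matnr IN (%s)" % placeholders,
                  keys,
              ).fetchall()
          )
          for rec in records:
              rec["matkl"] = lookup.get(rec["matnr"])
          return records

      if __name__ == "__main__":
          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE mara (matnr TEXT PRIMARY KEY, matkl TEXT)")
          conn.executemany("INSERT INTO mara VALUES (?, ?)", [("M1", "A"), ("M2", "B")])
          data = [{"matnr": "M1"}, {"matnr": "M2"}, {"matnr": "M3"}]
          print(enrich_fast(conn, data))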
    Hope it Helps
    Chetan
    @CP..

  • What are the CRM transaction data DataSources and the data load procedure

    Hi BI Gurus,
    Can anybody provide the CRM transaction data DataSource names and the procedure to load them into BI 7.0?
    I already know the master data load procedure from CRM to BI 7.0.
    If you can provide step-by-step documents, that would help even more.
    Thanks in Advance,
    Venkat

    Hi Venkat
    In order to find the DataSources you want, you can log in to the CRM system and then use transaction RSA6. There you can expand all the subtrees by left-clicking on the first line and then clicking the expand button. After that you can easily search for any DataSource you may want.
    Hope that helps
    Rgds
    John

  • Relationship between ERPi metadata rules and Data load rules

    I am using version 11.1.1.3 to load EBS data to ERPi and then to FDM. I have created a metadata rule where I have assigned a ledger from the Source Accounting Entities. I have also created a data load rule that references the same ledger as the metadata rule. I have about 50 ledgers to integrate, so I have added the source adapters that reference the data load rule.
    My question is: what is the relationship between the metadata rule and the data load rule with respect to the ledger? If you can specify only one ledger per metadata rule, then how does FDM know to use another metadata rule with another ledger attached to it? Thanks!


  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems that I face very often and would like to get some more info on these topics:
    1. Transfer rules activation. I just finished transporting my cubes, ETL, etc. to the production system and started filling the cubes with data. Very often during a data load it turns out that transfer rules need to be activated, even though I transported them active and (I think) did not change anything after transporting. Then I again create transfer rule transports on dev, transport the changes to prod, and have to execute the data load again.
    It is very annoying. What do you suggest doing about this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system: it was about 0.5 h for 50,000 records. But when I executed the load on production it was 2 h for 200,000 records, so it seemed much slower than dev.
    I thought it would be at least as fast as the dev system. What can influence data load performance and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure (though in general, as you know, activation of the transfer structure should happen automatically after transporting the object).
    2. One reason for the difference in time is environmental: as you know, in the production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case both systems are actually performing equally: you said the dev system took half an hour for 50,000 records and production took 2 hours for 200,000 records, so there are simply more records in production and it took proportionally longer. If it is really causing a problem, then you have to do some performance tuning.
    Hope this helps
    Thanks
    Sat

  • Selective Deletion and Data Load from Setup Tables for LIS Extractor

    Hi All,
    We came across a situation where one of the deltas in the PSA was not loaded into the DSO. This DSO updates another cube in the flow. Since many days passed before this miss came to our attention, we need to selectively delete the data for those documents from the DSO and the cube and then take a full load for those documents after filling the setup table.
    Now what would be the right approach to load this data from the setup table > DSO > cube? There is a change log present for those documents, and a few KPIs in the DSO are in summation mode.
    Regards
    Jitendra

    Thanks, Ajeet!
    This is the Sales Order extractor. The data got loaded into the ODS just fine, but the data is also coming into the ODS from a different extractor. Everything is fine in the ODS, but not in the cube. Would a full repair request versus a full load make a difference when the data is going to the cube? I thought it would matter only if I were loading to the ODS.
    What do you mean by "Even if you do a full load without any selections you should do a full repair"?
    Thanks.
    W

  • Query has to display completed quarters and Data loaded month.

    Hi All,
    I have 2 issues in BEx:
    1. I need to display the data-loaded month (0CALMONTH):
    e.g. Jan 2007 data was loaded in May 2007; my query has to display 3 months: Jan 2007, Dec 2006, and Nov 2006.
    2. Completed quarters: if we are in the middle of quarter 3, the query has to display quarter 1 and quarter 2 (completed); nothing should be displayed for quarter 3.
    Thanks in advance for your inputs.

    1. You mentioned the data-loaded month, but are you pointing to the calendar month the data belongs to? Do you want the display based on the calendar month the data belongs to (Jan 2007), the loaded month (May 2007), or the last loaded month?
    2. Create a user exit variable on the quarter or month InfoObject. In the user exit code, check which quarter the current month belongs to; if it is the middle of the 3rd quarter, pass quarters 1 and 2 (or months 1 to 6). See the sketch below.
    hope this helps.
    Nagesh Ganisetti.
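    The date arithmetic such a user exit needs is small; here it is sketched in Python purely to make the two requirements concrete. The actual BEx variable exit is ABAP code in a CMOD customer exit, so treat this only as an illustration of the logic, not the exit itself.

      from datetime import date

      def completed_quarters(today: date) -> list:
          # Quarters of the current year that have fully ended; mid-Q3 yields Q1 and Q2.
          current_quarter = (today.month - 1) // 3 + 1
          return ["%dQ%d" % (today.year, q) for q in range(1, current_quarter)]

      def month_and_two_before(year: int, month: int) -> list:
          # The given month plus the two months before it, as 0CALMONTH-style YYYYMM values,
          # matching the example in the question (Jan 2007 -> Jan 2007, Dec 2006, Nov 2006).
          months = []
          y, m = year, month
          for _ in range(3):
              months.append("%d%02d" % (y, m))
              m -= 1
              if m == 0:
                  y, m = y - 1, 12
          return months

      if __name__ == "__main__":
          print(completed_quarters(date(2007, 8, 15)))   # ['2007Q1', '2007Q2']
          print(month_and_two_before(2007, 1))           # ['200701', '200612', '200611']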

  • Driver dimension and data load dim

    Hello,
    Can anyone explain the difference between a driver dimension and a normal (data load) dimension?
    Thank You!!!

    Hi,
    Driver dimensions are the varying column dimensions that reside in the data table or file (i.e., columns such as Customer, Product, Date, etc.). Single default members from the other dimensions are fixed in the POV during the data load (e.g., the Actual member of the Scenario dimension is usually fixed when loading actual data). See the example below.
    Cheers,
    Alp
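    To make the distinction concrete, here is a tiny hypothetical example: each record in the load file supplies members only for the driver dimensions, while the remaining dimensions are pinned to single members for the whole load.

      # Hypothetical load record: Customer, Product and Period are driver dimensions,
      # so every row carries its own members for them, plus the value being loaded.
      record = {"Customer": "Cust100", "Product": "Widget-A", "Period": "Jan", "Amount": 1250.0}

      # The non-driver dimensions are fixed once for the entire load (the POV),
      # e.g. Scenario = Actual, Version = Final, Year = FY08.
      point_of_view = {"Scenario": "Actual", "Version": "Final", "Year": "FY08"}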
