OBIEE Performance with lakhs of records

Hi,
I have 4 lakh records in my OBI report. The default number of rows displayed in OBI is 25.
When I try to expand it to all pages, the page gets stuck and does not display all the pages.
It stops responding and I have to kill the process manually via Task Manager.
What can be done here? If I enable the cache, will that solve the problem?
Ideally, how are reports with lakhs of records handled in OBI?
If I download the data I am able to see all the records, but the file size is still 120 MB.
Can't I view all 4 lakh records in OBI without downloading? If I can, what needs to be done here?
Please help....

Hi,
Increase the ResultRowLimit in the instanceconfig.xml file and check again.
Refer : http://obiee101.blogspot.com/2010/01/obiee-resultrowlimit.html
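For example, a minimal sketch of the relevant fragment (assuming 10g-style element placement; the location of the element differs in 11g, so verify it against your version and restart the Presentation Services after the change):

<ServerInstance>
  <PivotView>
    <!-- Maximum number of rows a view may fetch; raising it increases server and browser load. -->
    <ResultRowLimit>500000</ResultRowLimit>
  </PivotView>
</ServerInstance>

Even with a higher limit, rendering 4 lakh rows in a browser is rarely practical; paging through the report or downloading it is usually the better option.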
Regards,
Srikanth

Similar Messages

  • SCD 2 load performance with 60 million records

    Hey guys!
    I'm wondering what the load performance would be for a type 2 SCD mapping based on the framework presented in the transformation guide (pages A1-A20). The dimension has the following characteristics:
    60 million records
    50 columns (including 17 to be tracked for changes)
    Has anyone come across a similar case?
    Mark or Igor - is there any benchmark available on SCD 2 for large dimensions?
    Any help would be greatly appreciated.
    Thanks,
    Rene

    Rene,
    It's really very difficult to guesstimate the loading time for a similar configuration. Too many parameters are missing, especially hardware. We are in the process of setting up some real benchmarks later this year - maybe you can give us some interesting scenarios.
    On the other hand, 50-60 million records is not that many these days... so I personally would consider anything more than several hours (on half-decent hardware) as too long.
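    For orientation, here is a generic sketch of the two DML steps a type 2 load boils down to (illustrative SQL only, not the OWB framework itself; table, column and sequence names are hypothetical, and only two tracked columns are shown):
    -- 1) Close the current version of every member whose tracked columns changed
    UPDATE dim_customer d
       SET d.effective_to = SYSDATE,
           d.current_flag = 'N'
     WHERE d.current_flag = 'Y'
       AND EXISTS (SELECT 1
                     FROM stg_customer s
                    WHERE s.customer_id = d.customer_id
                      AND (s.name <> d.name OR s.segment <> d.segment));
    -- 2) Insert a new current version for every member that now lacks one
    INSERT INTO dim_customer
           (customer_key, customer_id, name, segment, effective_from, effective_to, current_flag)
    SELECT dim_customer_seq.NEXTVAL, s.customer_id, s.name, s.segment, SYSDATE, NULL, 'Y'
      FROM stg_customer s
     WHERE NOT EXISTS (SELECT 1 FROM dim_customer d
                        WHERE d.customer_id = s.customer_id
                          AND d.current_flag = 'Y');
    With 17 tracked columns the change predicate and the row width grow accordingly, which is why hardware and indexing tend to dominate the runtime.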
    Regards:
    Igor

  • How to upload 15 lakh records into a Z-table with good performance

    I have 15 to 20 lakh records in an internal table, and I'm uploading them into a Z-table (created in SE11).
    While doing so, it's taking a lot of time.
    I'm writing the following statement:
    MODIFY ztable FROM TABLE itab.    " itab is the internal table
    Please let me know if there is any other alternative which gives better performance.
    Moderator message - Moved to the correct forum
    Edited by: Rob Burbank on Feb 2, 2010 11:59 AM

    Hi Rob,
    just trace a
    MODIFY ZTABLE FROM TABLE ITAB
    with ST05 and you will see what the DBI / DB does with this statement.
    If I remember right you are on DB6. Given current releases it might be possible that a MERGE (which can handle arrays) will be used instead of UPDATE / INSERT on a row-by-row basis.
    On most platforms the MODIFY ... FROM TABLE is executed row by row, like this:
    UPDATE ztable FROM dbtab_row.
    IF sy-subrc NE 0.
      INSERT ztable FROM dbtab_row.
      IF sy-subrc NE 0.
        UPDATE ztable FROM dbtab_row.
      ENDIF.
    ENDIF.
    If the rows already exist, you will only see updates.
    If the rows don't exist, you will see an update (no record) followed by an insert (1 record).
    In rare cases you might see update (no record), insert (no record, because somebody else inserted the row after the first update), update (1 record).
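    If the trace confirms that row-by-row processing dominates, one common mitigation is to write the data in packages (a sketch only; the package size of 10,000 is a starting guess to tune against the trace, and the intermediate COMMIT WORK assumes your load may be committed piecewise):
    " Write ITAB to ZTABLE in packages instead of one huge array operation
    DATA: lt_pack LIKE itab,
          ls_row  LIKE LINE OF itab.
    LOOP AT itab INTO ls_row.
      APPEND ls_row TO lt_pack.
      IF lines( lt_pack ) >= 10000.
        MODIFY ztable FROM TABLE lt_pack.
        COMMIT WORK.                       " release locks and log space per package
        REFRESH lt_pack.
      ENDIF.
    ENDLOOP.
    IF lt_pack[] IS NOT INITIAL.
      MODIFY ztable FROM TABLE lt_pack.    " final partial package
      COMMIT WORK.
    ENDIF.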
    Kind regards,
    Hermann

  • What init parameters will help with OBIEE performance?

    Hi All,
    By any chance, does anyone have info on which init parameters will help with OBIEE performance?
    Thanks

    fast=true ;-)
    What performance is causing you a problem? Data retrieval? General UI navigation? It's a massive area to cover. Can you be more specific?

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as the backend and WebI and Xcelsius as the frontend. As part of this we are experiencing very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we are experiencing OK performance during selection of data and traditional WebI filtering; however, when using the BW hierarchy for navigation within WebI, response times increase significantly.
    The general solution setup is as follows:
    1) Business Content version of the personnel administration InfoProvider 0PA_C01. The InfoProvider contains 30,000 records.
    2) MultiProvider to act as a semantic Data Mart layer in BW.
    3) BEx query to act as Data Mart query and metadata exchange for BOE. All key figure restrictions and calculations are done in this Data Mart query.
    4) Traditional BO OLAP universe mapped 1:1 to the BEx Data Mart query. No calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
    As we are aware that performance is a very subjective issue, we have created several test scenarios with different dataset sizes, various filter criteria and modeling techniques in BW.
    Furthermore, we have tried to apply various traditional BW performance tuning techniques, including aggregates, physical partitioning and pre-calculation - all without any luck (pre-calculation doesn't seem to work at all, as WebI apparently isn't using the BW OLAP cache).
    In general, the best result we can get is with a completely stripped-down WebI report without any variables etc. and a total dataset of 1,000 records transferred to WebI. Even in this scenario we can't get each navigational step (when using drill-down on the Organizational Unit hierarchy, 0ORGUNIT) to perform faster than 15-20 seconds.
    That is, each navigational step takes 15-20 seconds with only 1,000 records in the WebI cache when using drill-down on the org. unit hierarchy!
    Running the same BEx query from BEx Analyzer with a full dataset of 30,000 records at the lowest level of detail takes only 1-2 seconds per navigational step, which rules out a BW modeling issue.
    As our productive scenario obviously involves a far larger dataset, as well as separate data from CATS and PT InfoProviders, we are very worried whether we will ever be able to use hierarchy drill-down from WebI.
    The question, as such, is whether there are any known performance issues related to the use of BW hierarchy drill-down from WebI and, if so, whether there are any ways to get around them.
    As an alternative, we are currently considering changing our reporting strategy by creating several more highly aggregated reports to avoid hierarchy navigation entirely. However, we still need to support specific divisions and their need to navigate the WebI dataset without limitations, which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

    Hi Henry, thank you for your suggestions, although I don't agree that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions:
    - Suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    - Tick "use structure elements" in RSRT: Done.
    - Enable query stripping in WebI: Done.
    - Upgrade your BW to SP09: Does SP09 contain improvements related to this point?
    - Use more runtime query filters: Not possible. Very simple query.
    Others:
    - RSRT combination H-1-3-3-1 (Expand nodes / Permanent Cache BLOB).
    - Uncheck preliminary hierarchy presentation in the query; only selected.
    - Check "Use query drill" in the WebI properties.
    Sorry for this mixed message, but while I was answering I tried what you suggested about suppressing unassigned nodes and it works perfectly. This is what was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • Getting error Unable to perform transaction on the record.

    Hi,
    My requirement is to implement a custom attachment and store the data in a custom LOB table.
    My custom table structure is similar to that of the standard FND_LOBS table, and I have inserted the data through an EO-based VO.
    Structure of the custom table:
    CREATE TABLE XXAPL.XXAPL_LOBS
    (
      ATTACHMENT_ID     NUMBER NOT NULL,
      FILE_NAME         VARCHAR2(256 BYTE),
      FILE_CONTENT_TYPE VARCHAR2(256 BYTE) NOT NULL,
      FILE_DATA         BLOB,
      UPLOAD_DATE       DATE,
      EXPIRATION_DATE   DATE,
      PROGRAM_NAME      VARCHAR2(32 BYTE),
      PROGRAM_TAG       VARCHAR2(32 BYTE),
      LANGUAGE          VARCHAR2(4 BYTE) DEFAULT ( userenv('LANG') ),
      ORACLE_CHARSET    VARCHAR2(30 BYTE) DEFAULT ( substr( userenv('LANGUAGE'), instr( userenv('LANGUAGE'), '.' ) + 1 ) ),
      FILE_FORMAT       VARCHAR2(10 BYTE) NOT NULL
    );
    I have created a simple messageFileUpload item and a submit button on my custom page and written the below code in the CO:
    Process Request code:
    if (!pageContext.isBackNavigationFired(false))
    {
      TransactionUnitHelper.startTransactionUnit(pageContext, "AttachmentCreateTxn");
      if (!pageContext.isFormSubmission())
      {
        System.out.println("In ProcessRequest of AplAttachmentCO");
        am.invokeMethod("initAplAttachment");
      }
    }
    else
    {
      if (!TransactionUnitHelper.isTransactionUnitInProgress(pageContext, "AttachmentCreateTxn", true))
      {
        OADialogPage dialogPage = new OADialogPage(NAVIGATION_ERROR);
        pageContext.redirectToDialogPage(dialogPage);
      }
    }
    ProcessFormRequest code:
    if (pageContext.getParameter("Upload") != null)
    {
      DataObject fileUploadData = (DataObject)pageContext.getNamedDataObject("FileItem");
      String strFileName = pageContext.getParameter("FileItem");
      if (strFileName == null || "".equals(strFileName))
        throw new OAException("Please select a File for upload");
      fileName = strFileName;
      contentType = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_MIME_TYPE");
      BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, fileName);
      String strItemDescr = pageContext.getParameter("ItemDesc");
      OAFormValueBean bean = (OAFormValueBean)webBean.findIndexedChildRecursive("AttachmentId");
      String strAttachId = (String)bean.getValue(pageContext);
      System.out.println("Attachment Id:" + strAttachId);
      int aInt = Integer.parseInt(strAttachId);
      Number numAttachId = new Number(aInt);
      Serializable[] methodParams = { fileName, contentType, uploadedByteStream, strItemDescr, numAttachId };
      Class[] methodParamTypes = { fileName.getClass(), contentType.getClass(), uploadedByteStream.getClass(), strItemDescr.getClass(), numAttachId.getClass() };
      am.invokeMethod("setUploadFileRowData", methodParams, methodParamTypes);
      am.invokeMethod("apply");
      System.out.println("Records committed in lobs table");
    }
    if (pageContext.getParameter("AddAnother") != null)
    {
      pageContext.forwardImmediatelyToCurrentPage(null,
                                                  true, // retain AM
                                                  OAWebBeanConstants.ADD_BREAD_CRUMB_YES);
    }
    if (pageContext.getParameter("cancel") != null)
    {
      am.invokeMethod("rollbackShipment");
      TransactionUnitHelper.endTransactionUnit(pageContext, "AttachmentCreateTxn");
    }
    Code in the AM:
    public void apply()
    {
      getTransaction().commit();
    }

    public void initAplAttachment()
    {
      OAViewObject lobsvo = (OAViewObject)getAplLobsAttachVO1();
      if (!lobsvo.isPreparedForExecution())
      {
        lobsvo.executeQuery();
      }
      Row row = lobsvo.createRow();
      lobsvo.insertRow(row);
      row.setNewRowState(Row.STATUS_INITIALIZED);
    }

    public void setUploadFileRowData(String fName, String fContentType, BlobDomain fileData, String fItemDescr, Number fAttachId)
    {
      AplLobsAttachVOImpl VOImpl = (AplLobsAttachVOImpl)getAplLobsAttachVO1();
      System.out.println("In setUploadFileRowData method");
      System.out.println("In setUploadFileRowData method fAttachId: " + fAttachId);
      System.out.println("In setUploadFileRowData method fName: " + fName);
      System.out.println("In setUploadFileRowData method fContentType: " + fContentType);
      // Note: this iterator walks every row currently in the VO,
      // not just the newly created one, and stamps each of them.
      RowSetIterator rowIter = VOImpl.createRowSetIterator("rowIter");
      while (rowIter.hasNext())
      {
        AplLobsAttachVORowImpl viewRow = (AplLobsAttachVORowImpl)rowIter.next();
        viewRow.setFileContentType(fContentType);
        viewRow.setFileData(fileData);
        viewRow.setFileFormat("IGNORE");
        viewRow.setFileName(fName);
      }
      rowIter.closeRowSetIterator();
      System.out.println("setting on fndlobs done");
    }
    The attachment ID is a sequence-generated number, and its defaulting logic is written in the EO:
    public void create(AttributeList attributeList)
    {
      super.create(attributeList);
      OADBTransaction transaction = getOADBTransaction();
      Number attachmentId = transaction.getSequenceValue("xxapl_po_ship_attch_s");
      setAttachmentId(attachmentId);
    }

    public void setAttachmentId(Number value)
    {
      System.out.println("In ShipmentsEOImpl value::" + value);
      if (getAttachmentId() != null)
      {
        System.out.println("In AplLobsAttachEOImpl AttachmentId::" + (Number)getAttachmentId());
        throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
                                     getEntityDef().getFullName(),  // EO name
                                     getPrimaryKey(),               // EO PK
                                     "AttachmentId",                // attribute name
                                     value,                         // attribute value
                                     "AK",                          // message product short name
                                     "FWK_TBX_T_EMP_ID_NO_UPDATE"); // message name
      }
      if (value != null)
      {
        // Attachment ID must be unique. To verify this, you must check both the
        // entity cache and the database. In this case, it's appropriate
        // to use findByPrimaryKey() because you're unlikely to get a match, and
        // are therefore unlikely to pull a bunch of large objects into memory.
        // Note that findByPrimaryKey() is guaranteed to check all AplLobsAttachments.
        // First it checks the entity cache, then it checks the database.
        OADBTransaction transaction = getOADBTransaction();
        Object[] attachmentKey = { value };
        EntityDefImpl attachDefinition = AplLobsAttachEOImpl.getDefinitionObject();
        AplLobsAttachEOImpl attachment =
          (AplLobsAttachEOImpl)attachDefinition.findByPrimaryKey(transaction, new Key(attachmentKey));
        if (attachment != null)
        {
          throw new OAAttrValException(OAException.TYP_ENTITY_OBJECT,
                                       getEntityDef().getFullName(), // EO name
                                       getPrimaryKey(),              // EO PK
                                       "AttachmentId",               // attribute name
                                       value,                        // attribute value
                                       "AK",                         // message product short name
                                       "FWK_TBX_T_EMP_ID_UNIQUE");   // message name
        }
      }
      setAttributeInternal(ATTACHMENTID, value);
    }
    Issue faced:
    When I run the page for the first time, data gets inserted into the custom table perfectly on clicking the Upload button.
    But when I click the Add Another button on the same page (which basically redirects to the same upload page and increments the attachment ID by 1),
    I get the below error:
    Error
    Unable to perform transaction on the record.
    Cause: The record contains stale data. The record has been modified by another user.
    Action: Cancel the transaction and re-query the record to get the new data.
    I have spent the entire day trying to resolve this issue, but no luck.
    Any help on this will be appreciated; let me know if I am going wrong anywhere.
    Thanks and Regards
    Avinash

    Hi,
    After inserting the values, please re-execute the VO query.
    Also, try redirecting to the page with no AM retention.
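    For the second suggestion, a sketch based on the forward call already used in the CO (whether releasing the AM fits your transaction flow is something to verify):
    // Variant of the existing "Add Another" branch: retainAM = false releases
    // the application module, so the next request starts with a fresh
    // transaction instead of re-posting the already committed (now stale) row.
    pageContext.forwardImmediatelyToCurrentPage(null,
                                                false, // do not retain AM
                                                OAWebBeanConstants.ADD_BREAD_CRUMB_YES);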
    Thanks,
    Gaurav

  • Infotype to link Personnel Number with Vendor Master Record

    Hi All,
    Please help me: which infotype links the Personnel Number with the Vendor Master Record in the AP module?
    The customer requires that when an employee is created, the system automatically generates the vendor master record.
    Please help us.
    Thanks

    Hi,
    PRAA works absolutely fine. I have an external number range for vendors.
    There is an option so that you can create HR accounts as vendor accounts:
    Path: Accounting > Financial Accounting > Travel Management > Personnel-Related Master Data > Create Vendors (transaction code PRAA)
    Look at this description:
    From the HR master data, this program creates a batch input session for maintaining person-specific vendor master records.
    The report can process vendor master records as follows:
    o Initial set up (create/add missing company code segment)
    o Update (change completely according to vendor and HR reference)
    o Change (only according to HR master data, for example, name)
    o Block
    The following infotypes from HR master data are taken into account for the key date specified:
    o Actions (Infotype 0000)
    o Organizational Assignment (Infotype 0001)
    o Personal Data (Infotype 0002)
    o Permanent Residence (Infotype 0006 Subtype 1)
    o Bank Details (Infotype 0009 Subtype 0 or 2)
    All other required data is taken from a reference vendor that you can specify as a parameter when you start the program. Note that for the reference vendor, reference data must exist for all company codes that are applicable according to HR master data. Since program RPRAPA00 cannot assign vendor master record numbers externally when creating vendor master records, you must be sure that the reference vendor is assigned to an account group that requires internal number assignment.
    If no house bank is specified in the reference vendor, it is determined via feature TRVHB (Implementation Guide for 'Travel Expenses' -> Transfer to Data Medium Exchange -> Set Up Feature for Determining House Bank) according to the employee's organizational situation.
    The link between the HR employee's master record and the corresponding vendor master record is set up by entering the personnel number in the company code segment of the vendor master record.
    General program flow
    Program RPRAPA00 creates a sequential work file in the specified directory. Ensure that you use a global directory that can be accessed by all application servers; a directory is automatically proposed. The work file is later transformed into a batch input session by the subsequent program RFBIKR00, which is started automatically.
    In test run mode this program delivers information messages which document the creation of a batch input session. Since each information message must be confirmed manually, the test run should only be performed for a small number of personnel numbers.
    In productive run mode, however, the program RFBIKR00 is automatically started as a job; the information mentioned above is stored in the job log in this case.
    Initial setup of vendor master records
    Corresponding vendor master records are created from the infotypes listed above and the non-HR-specific information from the reference vendor. During the process, the personnel number is stored in the company code segment of the vendor master record. Persons who already have a vendor master record with the corresponding personnel number are rejected in this run.
    If a vendor master record already exists for a personnel number, but without a company code segment for the company code in which the employee exists at the key date entered, the program adds the required company code segment to the vendor master record. You must however activate the checkbox "Create new company code segment for existing vendor master records".
    The HR master data used is the infotype record valid on the key date entered.
    Update of vendor master records
    The corresponding vendor master records are updated from the infotypes listed above and the non-HR-specific information from the reference vendor. Persons who already have a vendor master record with the corresponding personnel number are rejected in this run. The system uses HR master data from the infotype records valid for the key date.
    Change vendor master records according to HR master data
    All HR-specific data is changed in the vendor master records for personnel numbers that have changes since the date recorded under '... with last change since'. The system again uses HR master data from the infotype records valid for the key date specified.
    Block vendor master records
    All vendor master records which correspond with the selected personnel numbers are blocked for posting.
    User exits
    For the following situations, two empty routines are supplied in the includes RPRAPAEX and RPRAPAEX_001 which you can adapt to your requirements:
    o The employee's last name is entered in the sort field in the vendor master record. If you do not want this to happen, you can program the behavior you want in the user exit routine 'set_mc_field_by_user'.
    o If the address editing in the vendor master record does not suit your requirements, it can also be changed. For this purpose you have the user exit routine 'set_address_by_user'.
    o If you want to assign numbers to the vendor master records to be created via external number assignment (for example, vendor master record number = personnel number), you can adjust the user exit routine 'SET_VENDOR_NO_BY_USER' (in include RPRAPAEX_00). Note that the account group of the reference vendor must also have external number assignment.
    Note that in each case a thorough knowledge of the programming language ABAP/4 is necessary.
    Set up vendor master records in a separate system
    The distribution of personnel master data from an HR system to an FI system is a prerequisite for creating, changing, and blocking vendor master records.
    The section Master Data Distribution (link to the separate HR unit) of the ALE IMG explains how the FI system is supplied with the relevant employee data from the HR system. In the FI system, the report RPRAPA00 described here can be used to create, change, and block vendor master records.
    Otherwise, try transaction code VPE1.
    Regards
    Devi

  • Planning function in IP or with BW modeling - a case with 15 million records

    Hi,
    we need to implement a simple planning function (qty * price) which has to be executed for 15 million records at a time (the quantity on 15 million records multiplied by an average price calculated at a higher level). I'd still like to implement this with a simple FOX formula, but I fear for the performance given the number of records. Does anyone have experience with this volume? Would you suggest doing this within IP or via BW modeling? The maximum accepted lead time for this planning function is 24 hours...
    The planning function is expected to be executed in batch or background mode, but it should be triggered from an IP input query and not via RSPC, for example...
    please advise.
    D

    Hi Dries,
    using BI IP you should definitely partition the work via planning sequences in a process chain, cf.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/946677f8fb0cf2e10000000a114a6b/frameset.htm
    Planning functions load the requested data into main memory; with 15 million records you will have a problem. In addition, it is not a good idea to give the whole workload to a single work process (a planning function uses only one work process). So partition the problem so that you can use parallelization.
    Process chains can be triggered via an API, cf. function group RSPC_API, so you can easily start a process chain from a planning function, as sketched below.
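    A minimal sketch (the chain name ZPC_PLAN_PART is hypothetical; RSPC_API_CHAIN_START is one of the function modules in that group):
    " Start a process chain from ABAP, e.g. from a planning function exit
    DATA lv_logid TYPE rspc_logid.
    CALL FUNCTION 'RSPC_API_CHAIN_START'
      EXPORTING
        i_chain = 'ZPC_PLAN_PART'
      IMPORTING
        e_logid = lv_logid.    " log ID of the started chain run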
    Regards,
    Gregor

  • B1 and SQL Server 2005 performance with 3,000,000 invoice lines per year

    I would like to know if SAP Business One with SQL Server 2005 could work with the following volumes:
    - 40,000 business partners
    - 40,000 invoices per month
    - approx. 3,000,000 invoice lines per year
    Of course it will be necessary to change some forms in B1.
    What do you think?
    Do you know of any B1 customer working with that amount of data?

    > Hi,
    >
    > I think a good SQL 2005 tuning (done by a good DBA)
    > will improve performance. A number of records like that
    > shouldn't hurt that kind of DB engine...
    Hi,
    I'm sure that MSSQL 2005 can handle the number of records and transactions in question. Even MSSQL 2000 can do it. However, any DB engine can be brought to its knees by the combination of a 2-tier application architecture and badly designed queries. B1 is a case in point. I wouldn't go into such a project without decent preliminary load testing and an explicit commitment of support from the SAP B1 dev team.
    I have heard of implementation projects where B1 simply couldn't handle the amount of data. I've also participated in some presales cases for B1 where we decided not to take on a project because we saw that B1 couldn't handle the amount of data (while the other features of B1 would have been more than enough for the customer). The one you're currently looking at seems like one of those.
    Henry

  • Can I use Solaris with my Home Recording Studio?

    Hello,
    I am currently building a basic home recording studio. I recently assembled a new computer and will be purchasing music recording equipment very soon. Before I purchase the audio recording gear, I was curious about a few things pertaining to the use of Solaris as my system of choice.
    The data that I store on my hard drives will need to be secure. I had originally considered RAID 5, though I recently found out about RAID-Z, which appears to be far ahead of the other RAID levels. I am curious to know more about how to set up a RAID-Z system.
    The computer will be running Solaris, various Linux or BSD systems, and Windows Vista. I have four 500 GB hard drives. I am considering separating one out and using it strictly for Vista, and using the remaining three with a RAID setup for *nix. If I had a choice, I would put much less hard drive space into Vista (capping it at 150 to 200 GB), though I have yet to come across a RAID system that cooperates effectively or efficiently cross-platform. With that in mind, the majority of the remaining 1.5 TB can go into my studio OS - hopefully Solaris - while the left-over hard drive space is set aside for various other *nix systems.
    I've used Solaris 10 before, though haven't really gone into detail on the mechanics of such features as ZFS, zones, now RAID Z, and the many more amazing features that Solaris provides. Likewise, I am not really familiar with how well the system might operate with software/applications that I could use for recording and composing music.
    Beyond the available 2 TB (or 1.5 TB RAID-ready) space, my computer is running a new 64-bit quad-core AMD CPU (~2.6 GHz) and currently has 4 GB of 1066 MHz RAM. In my search for a computer that suits me for high-performance recording - including the editing and mastering of audio - I am attempting to narrow down my options on the software side of things. I have been very satisfied with the general file management of Solaris, though I am unsure of how well it might run as the software side of a home recording studio. An additional deciding factor will be the software's compatibility with audio PCI cards and external recording/audio interfaces. There is always a question of whether or not the necessary drivers, firmware, or similar are available between any given hardware and software components.
    Some other *nix options I have considered - "Ubuntu Studio 64-bit" (though it appears to prefer 32-bit) and "64 Studio" (also Debian-based, though appears to be optimized for 64-bit). Some colleagues have used Gentoo with studio applications, though I don't know of any Linux distributions that offer the versatility, power, stability, and efficiency of Solaris for the operating system that it is.
    If I use something like "64 Studio", I will have the advantage of a system that is ready-built for studio recording purposes, as well as a system that I know is compatible with the sound gear that I am considering purchasing. However, those systems are restricted by their file system types, their RAID availability and dependability, and the disadvantages of using the Linux kernel in general. (I enjoy Linux, though I admittedly prefer Unix over it.)
    Can I use Solaris with my home recording studio?
    Thank you!
    Edited by: the_professor on Nov 9, 2008 6:26 PM
    Corrected a typo.
    Edited by: the_professor on Nov 9, 2008 6:30 PM
    Caught another typo.

    I'm not sure what your question is. If you have particular software you want to run, you should see what you can get that package to run under. Sound hardware support doesn't tend to be too portable unless it's written on top of OSS (http://www.opensound.com). Were I you, I'd consider separating out the storage part and the processing part: get something like a cheap Dell SC440 to be a file server and give the drives to that (unless you actually can get all the components working under Solaris).
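    If you do go the Solaris/ZFS route, the RAID-Z setup you asked about is a one-liner (a sketch only; the pool name and the c#t#d# device names are hypothetical and must match your disks):
    # Create a RAID-Z pool from three disks, then a filesystem for audio projects
    zpool create studio raidz c1t1d0 c1t2d0 c1t3d0
    zfs create studio/audio
    zpool status studio    # verify the pool is healthy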
    -r

  • How to synchronize video cameras with Logic for recording?

    Hi guys,
    I have a Sony VX 2000 and Logic Pro.
    I'd like to do a multitrack audio recording of a musical theatre performance and simultaneously record video with two cameras.
    I want to record the sound into Logic Pro; that way I can mix the sound later (multitrack audio).
    At the same time I want to record video with the cameras.
    There may be 100 takes or more....
    What is the best way to synchronize Logic with the video recording?
    I don't want to synchronize each audio take with each video take later; there must be an easier way!
    Is it possible to start recording video and audio at exactly the same time?
    Do I need any additional hardware?
    Any feedback highly appreciated!
    Thanks Eddie

    Have the drummer make a 2-pop (click) at the beginning of every take.
    Set the camera to 16 bit audio and you can import the camera audio into Logic.
    Record multitrack into Logic at the same time.
    You can align the takes to the audible click.
    Sorry, it's the only thing I can think of without having very expensive externally synced cameras.
    Logic doesn't record video, it plays it back.
    So making some type of slate is the way to go.
    It will be a pain to manually align all that stuff.

  • Initial load with fewer records added in a cube

    Hi Everybody,
    I have loaded an initial delta into a cube. More than one lakh records were transferred, but only 30,000 records were added. I don't know why this happened or how to solve it. Help me, please...

    Hi Jaya,
    It is not a problem with the number of records, as the cube aggregates the key figure values.
    Match the InfoCube data with the PSA; if you find any missing records, then debug the routines.

  • Query Performance with Exception aggregation

    Hello,
    My query key figures have exception aggregation at order line level, as per the requirement.
    Currently the cube holds 5M records; when we run the query, it runs for more than 30 minutes.
    We can't remove the exception aggregation.
    The cube is already modeled correctly, and we don't want to use the cache.
    Can anybody please advise whether there is any better approach to improving query performance with exception aggregation?
    Thanks

    Hi,
    We have the same problem and raised an OSS ticket. They replied with note 1257455, which offers all the ways of improving performance in such cases. I guess there is nothing else to do but to precalculate this exception-aggregated formula in the data model, via transformations or ABAP.
    By the way, the cache cannot help you in this case, since exception aggregation is calculated after cache retrieval.
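    To illustrate the precalculation route: if the exception aggregation is effectively a count per order line, a counter key figure filled during load lets the query use plain summation instead (a sketch; RESULT_PACKAGE and the generated structure type are standard in BI 7 end routines, but the field ZLINECNT is hypothetical and the real logic depends on your formula):
    " Transformation end routine: stamp a counter of 1 on each order line record
    FIELD-SYMBOLS: <result_fields> TYPE _ty_s_tg_1.
    LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
      <result_fields>-/bic/zlinecnt = 1.
    ENDLOOP.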
    Hope this helps,
    Sunil

  • Stage Performance with 2408 audio interface

    a) I need to set up a playlist with my 80 projects in front of me, ready to play, with minimal loading time every time I select a new song to perform.
    Each of my Logic projects has about 30 audio instrument tracks and 10 audio instrument tracks with many plug-ins.
    I am considering buying On Stage, but the software only takes 20 projects at a time.
    Is it possible to run more than one On Stage instance on the same computer in order to cover my 80 Logic projects?
    I don't want to learn how to use Ableton Live; now I am finally comfortable with Logic.
    b) About the loading time and the computer CPU:
    1) What would be the best way to avoid the loading time between songs? I can bounce my 50 tracks down to 4 stereo audio tracks and create a new project with these 4 tracks to make loading faster, or
    2) I can just leave the project with the original tracks, but even with my MacBook Pro 2.4 GHz, the computer sometimes crashes because of too many effects and tracks (2 GB RAM).
    Please, if you have an idea I would appreciate it a lot!!
    Thank you in advance.
    John

    An external audio interface will not make a significant difference to the CPU strain. All it might do is improve the quality of your output signal. It will also not improve (or alter) the intrinsic audio quality of a bounce, unless there are external hardware synths/modules involved.
    In the best case scenario (high end interfaces) the latency might drop a few milliseconds.
    If you do not record audio, you can turn off software monitoring.
    I also have the Lush; it's a lovely synth, but also a CPU hog that can pretty easily overload the one CPU core it is processed on.
    What often helps with these synths is to turn off any FX (delay, chorus, reverb, etc.) that are built into the synth, and use Logic's plugins as inserts on the channel strip. Also, turning off the FX on a synth reveals the actual ("virgin") sound of the synth itself.

  • Report performance with hierarchies

    Hi
    How can we improve query performance with hierarchies? We have to do a lot of navigation in the query, and the volume of data is very big.
    Thanks
    P G

    Hi,
    check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° The OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than queries that receive their result set from database access
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data.
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° When you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides giving the possibility to manage/delete single requests) increases the volume of data and reduces performance in reporting, as the system has to aggregate with the request ID every time you execute a query. By using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs and, logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° By using partitioning you can split up the whole dataset for an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Hope it helps!
    Thank you,
    dst
