Does Essbase System 9 Support Parallel Data Loads?

Hi,
Can anyone tell me whether Essbase System 9 supports parallel data loads or not?
If it does, how many load rules can be executed in parallel? If any of you know, please tell me. This would be a great help to me.
Thanks a lot.

Hi Atul Kushwaha,
Are you sure that Essbase System 9 supports parallel data loads? In the New Features guide for Essbase 11.1.1, they state that Essbase 11.1.1 supports parallel data loads, and that it supports up to 8 rules files only.
So please confirm where I can find this information, or send me a link to it.
Thank You.
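For reference, the parallel-load MaxL syntax documented for Essbase 11.1.1 aggregate storage databases looks roughly like this (the application, database, and rules-file names are illustrative, borrowed from the sample applications):
import database AsoSamp.Sample data
  connect as TBC identified by 'password'
  using multiple rules_file 'rule1', 'rule2', 'rule3'
  to load_buffer_block starting with buffer_id 100
  on error write to 'error.txt';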

Similar Messages

  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
    Is there a way that MaxL could generate multiple error files during a parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1', 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
    I want to get error files like this: rule1.err, rule2.err (error files with the rule file name included). Is this possible in MaxL?
    I also noticed that if I hard-code the error file name as above, it gives me error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

    Are you saying that if you specify the error file as "error.txt", Essbase actually produces multiple error files and appends a number?
    Tim.
    Yes, it's appending them the way I said.
    Out of interest, though - why do you want to do this? The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
    I have about 6-7 rule files, using which the data is pulled from SQL and loaded into Essbase. I don't say it's impossible to track the error records.
    Regardless, the only way I can think of to have total control of the error file names is to use the 'manual' parallel load approach. Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer, then commit them all together. This gives you most of the parallel load benefit, albeit with more complex scripting; see the sketch at the end of this thread.
    Even I had the same thought of calling multiple instances of MaxL from a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Has anyone tried it before?
    Thanks,
    DS
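    A minimal sketch of that manual approach, assuming a Unix shell and two rules files (the script names, buffer IDs, and credentials are all illustrative):
    #!/bin/sh
    # load1.msh initializes its own buffer, loads one rules file, and
    # writes its own error file:
    #   alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;
    #   import database AsoSamp.Sample data
    #     connect as TBC identified by 'password'
    #     using server rules_file 'rule1'
    #     to load_buffer with buffer_id 1
    #     on error write to 'rule1.err';
    # load2.msh does the same with rule2, buffer_id 2, and rule2.err.
    essmsh load1.msh &
    essmsh load2.msh &
    wait                # block until both background loads finish
    # commit.msh commits both buffers in a single transaction:
    #   import database AsoSamp.Sample data
    #     from load_buffer with buffer_id 1, 2;
    essmsh commit.msh
    Because each load runs in its own MaxL session, each one gets exactly the error file name you give it.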

  • Parallel Data Loading in Direct Mode

    Product : ORACLE SERVER
    Date written : 1999-08-10
    Parallel data loading in direct mode
    ====================================
    SQL*Loader supports parallel direct-mode data loads into the same
    table. Loading data from several sessions simultaneously in direct mode
    can greatly speed up loads of large data volumes, and the benefit is
    even greater when the data files are placed on physically separate disks.
    1. Restrictions
    - Only tables without indexes can be loaded.
    - Only APPEND mode is supported (replace, truncate, and insert modes are not).
    - The Parallel Query option must be installed.
    2. Usage
    Create a control file for each data file, then launch the loads as follows:
    $ sqlldr scott/tiger control=load1.ctl direct=true parallel=true &
    $ sqlldr scott/tiger control=load2.ctl direct=true parallel=true &
    $ sqlldr scott/tiger control=load3.ctl direct=true parallel=true
    3. Constraints
    - If the ENABLE parameter is used, constraints are enabled automatically
    once the data load is finished. However, this occasionally fails, so the
    constraint status must always be checked afterwards.
    - If a primary key or unique key constraint is present, enabling it
    automatically after the load can consume a lot of time building the index.
    For performance it is therefore preferable to load only the data in
    parallel direct mode and then build the index separately in parallel.
    4. Storage allocation and caveats
    When data is loaded in direct mode, the following steps are performed:
    - A temporary segment is created based on the STORAGE clause of the
    target table.
    - After the last data load finishes, the empty (unused) part of the last
    allocated extent is trimmed.
    - The header information of the extents belonging to the temporary
    segment is changed and the high-water mark (HWM) is updated so that the
    extents become part of the target table.
    This extent allocation method causes the following problems:
    - A parallel data load does not use the first INITIAL extent allocated
    when the table was created.
    - The normal extent allocation rules are not followed: each process
    allocates an extent of the size defined by NEXT to start its data load,
    and when a new extent is required it is sized using PCTINCREASE,
    calculated independently per process.
    - Severe fragmentation can occur.
    To reduce fragmentation and allocate storage efficiently:
    - Create the table with a small INITIAL of about 2-5 blocks.
    - In version 7.2 and later, specify the storage parameters in the
    OPTIONS clause, preferably giving INITIAL and NEXT the same size:
    OPTIONS (STORAGE=(MINEXTENTS n
                      MAXEXTENTS n
                      INITIAL n K
                      NEXT n K
                      PCTINCREASE n))
    - When the OPTIONS clause is written in the control file, it must appear
    after the INSERT INTO TABLES clause.
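    Following the advice in section 3 above, a minimal sketch of building the key index in parallel after the data load (the table, column, and constraint names are hypothetical):
    -- build the unique index in parallel after the direct load completes
    CREATE UNIQUE INDEX emp_pk_idx ON emp (empno) PARALLEL 4;
    -- enabling the constraint then reuses the existing index
    -- instead of building a new one serially
    ALTER TABLE emp ENABLE CONSTRAINT emp_pk;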


  • Parallel data loading

    Hi,
    I am in need of some help. I am currently designing a new extractor for transactional data that needs to be able to handle a high volume (more than 1 million) of records.
    We have a function module that already fetches the records in the desired structure, so I could use this FM as the extractor.
    However, this FM is not the most performant. For this reason we have a pre-fetch FM that, based on the selection criteria, pre-fetches the data and puts it in a buffer. The first FM I mentioned then reads from the buffer instead of the DB.
    So, I would need to call this pre-fetch FM once during the initialization, and at record fetching I would use the other FM... right?
    Now... I saw that I can set up the BW system so that it is smart enough to load data in parallel.
    Imagine I create an InfoPackage in which I define as selection options that students 1 - 100.000 need to be loaded, and I start the data loading. What selection criteria are passed to the extractor - 1 - 100.000, right?
    If 3 parallel threads are started, is the initialization done by only one request?
    The problem I am then facing is that the buffering might take X minutes, and the buffer would be bypassed because the needed records are not in it yet.
    I am not sure how to do this properly. Can anyone advise?
    Thanks.
    Jeroen

    First, thanks for the hints. In the meantime I found some other documentation regarding my issue.
    As far as I understand, if I want to load in parallel, I have to create multiple InfoPackages and split up the records in the selection criteria, e.g.:
    - InfoPackage 1, Students 1 - 10.000
    - InfoPackage 2, Students 10.001 - 20.000
    ...and so on.
    Following that, I need to create a Process Chain that starts loading all packages at the same point in time.
    Now... when the extractor is called, there are two parts that it runs through:
    - Initialization of the extractor
    - Fetching of records
    (via the flag i_initflag in the extractor).
    In the initialization I want to run the pre-fetch module; I have already worked out everything regarding that. Only when the pre-fetch is finished will the actual data loading start.
    What I am not sure about is: is this flag (the i_initflag mentioned above) passed for each InfoPackage that is started?
    Jeroen
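    For reference, the i_initflag branching inside a generic extractor function module would look roughly like this; the pre-fetch and buffer-read function modules are hypothetical, and whether the initialization call arrives once per InfoPackage is exactly the open question above:
    FUNCTION z_my_extractor.
    * Interface modeled on RSAX_BIW_GET_DATA_SIMPLE;
    * i_initflag is 'X' on the initialization call of a request.
      IF i_initflag = 'X'.
        " Initialization: run the pre-fetch so that later fetch
        " calls can read from the buffer instead of the DB.
        CALL FUNCTION 'Z_PREFETCH_TO_BUFFER'   " hypothetical FM
          TABLES
            i_t_select = i_t_select.
      ELSE.
        " Fetch call: return the next package from the buffer.
        CALL FUNCTION 'Z_READ_FROM_BUFFER'     " hypothetical FM
          TABLES
            e_t_data = e_t_data.
      ENDIF.
    ENDFUNCTION.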

  • How does ECC 6.0 support Dynamic Data Exchange?

    Hello Friends,
    I am looking for material on how an ECC 6.0 system supports Dynamic Data Exchange. Any leads would be of great benefit to me.
    Thanks for the help.
    With Regards
    Vasu

    Hello friends,
    Any lead on this would be of great help to me. Thanks for the help.
    I know that SAP does support Dynamic Data Exchange; I have read it in a book. But I am looking for how SAP supports DDE.
    Thanks for the help.
    With Regards

  • Does Oracle 10.1 support Transparent Data Encryption?

    Hi,
    Does Oracle release 10.1.0.3.0 support Transparent Data Encryption?
    If not, what can I use instead?
    Thanks

    According to http://download-uk.oracle.com/docs/cd/B14117_01/network.101/b10772/asoconfg.htm ,
    data encryption is supported for Oracle Net services (Advanced Security network encryption) in release 10.1. Transparent Data Encryption of stored data was introduced later, in release 10.2.
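    For the "what can I use instead" part, network encryption in 10.1 is configured in sqlnet.ora. A minimal sketch using parameters from that guide (the algorithm choice is illustrative):
    # sqlnet.ora on the server
    SQLNET.ENCRYPTION_SERVER = required
    SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
    # sqlnet.ora on the client
    SQLNET.ENCRYPTION_CLIENT = required
    SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
    Note that this protects data in transit only; it does not encrypt the data stored in the datafiles.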

  • After doing a system restore, Firefox won't load pages.

    I restored my laptop a couple of days ago (to a previous date, not a full system restore), and now Firefox will open to the home page (Google) but will not load any other page. It doesn't even bring up the "page won't load" page; it just says 'connecting' forever. I also noticed that even when I first open Firefox and it opens the Google homepage, it doesn't have google.com in the address bar - it has 'about:home'. Internet Explorer seems to be working fine, and my internet connection is fine, so I am not sure what the problem is.

    hello rlyslowff, one thing you could try is to remove all program rules for firefox from your firewall/security software and let it detect the program again...
    please also [https://www.mozilla.org/en-US/plugincheck/ update your plugins] (some of them are outdated & have security vulnerabilities that are actively exploited on the web).
    in case this doesn't solve the issue, go to the firefox ''menu ≡ > help ? > troubleshooting information'', copy the contents of that page and paste them here into a reply on the forum, as this might give us further clues about what is going on...

  • Does the WCF-OracleDB adapter support XMLType data for table select operations?

    I am getting the following error when I run Consume Adapter Service for an Oracle table select operation on one of the tables, which has an XMLType column. It works fine for other tables.
    Microsoft.ServiceModel.Channels.Common.MetadataException: Retrieval of Operation Metadata has failed while building WSDL at 'http://Microsoft.LobServices.OracleDB/2007/03/XXXX/Table/table_name/Select' ---> Microsoft.ServiceModel.Channels.Common.MetadataException:
    Incorrect Type: XMLTYPE. Possible causes: 1. Permission issue 2. Unsupported type.
       at Microsoft.Adapters.OracleDB.OracleCommonMetadataResolverHandler.ResolveTypeMetadata(String nodeId, TimeSpan timeout, TypeMetadataCollection& extraTypeMetadataResolved)
       at Microsoft.ServiceModel.Channels.Common.Design.MetadataCache.GetTypeMetadata(String uniqueId, Guid clientId, TimeSpan timeout)
       at Microsoft.ServiceModel.Channels.Common.MetadataLookup.GetTypeDefinition(String typeId, TimeSpan timeout)
       at Microsoft.Adapters.OracleDB.OracleCommonMetadataResolverHandler.ResolveTypeMetadata(String nodeId, TimeSpan timeout, TypeMetadataCollection& extraTypeMetadataResolved)
       at Microsoft.ServiceModel.Channels.Common.Design.MetadataCache.GetTypeMetadata(String uniqueId, Guid clientId, TimeSpan timeout)
       at Microsoft.ServiceModel.Channels.Common.MetadataLookup.GetTypeDefinition(String typeId, TimeSpan timeout)
       at Microsoft.Adapters.OracleDB.OracleCommonMetadataResolverHandler.ResolveOperationMetadata(String operationId, TimeSpan timeout, TypeMetadataCollection& extraTypeMetadataResolved)
       at Microsoft.ServiceModel.Channels.Common.Design.MetadataCache.GetOperationMetadata(String uniqueId, Guid clientId, TimeSpan timeout)
       at Microsoft.ServiceModel.Channels.Common.Design.WsdlBuilder.SearchBrowseNodes(MetadataRetrievalNode[] nodes, WsdlBuilderHelper helper, TimeoutHelper timeoutHelper)
       --- End of inner exception stack trace ---
    Server stack trace: 
       at Microsoft.ServiceModel.Channels.Common.Design.AdapterExceptions.ThrowMetadataException(String errorMessage, Object arg, Object source, Exception innerException)
       at Microsoft.ServiceModel.Channels.Common.Design.WsdlBuilder.SearchBrowseNodes(MetadataRetrievalNode[] nodes, WsdlBuilderHelper helper, TimeoutHelper timeoutHelper)
       at Microsoft.ServiceModel.Channels.Common.Design.WsdlBuilder.GenerateOperationSchemas(WsdlBuilderHelper helper, MetadataRetrievalNode[] nodes, TimeSpan timeout)
       at Microsoft.ServiceModel.Channels.Common.Design.WsdlBuilder.GetWsdl(MetadataRetrievalNode[] nodes, Uri uri, TimeSpan timeout)
       at Microsoft.Adapters.OracleCommon.OracleCommonWsdlRetrieval.Microsoft.ServiceModel.Channels.Common.IWsdlRetrieval.GetWsdl(MetadataRetrievalNode[] nodes, Uri uri, TimeSpan timeout)
       at Microsoft.ServiceModel.Channels.Common.Design.MetadataExchanger.ProcessMetadataGet(Message message, Uri target, TimeSpan timeout, MetadataLookup metadataLookup)
       at Microsoft.ServiceModel.Channels.Common.Design.MetadataExchanger.ProcessMetadataMessage(Message message, Uri target, TimeSpan timeout, MetadataLookup metadataLookup, Message& replyMessage)
       at Microsoft.ServiceModel.Channels.Common.Channels.AdapterRequestChannel.Request(Message message, TimeSpan timeout)
       at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
       at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
       at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
    Exception rethrown at [0]: 
       at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
       at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
       at Microsoft.ServiceModel.Channels.IMetadataRetrievalContract.GetMetadata(MetadataRetrievalNode[] nodes)
       at Microsoft.ServiceModel.Channels.Tools.MetadataSearchBrowse.MetadataPanel.GetWsdl(MetadataRetrievalNode[] nodes)
       at Microsoft.ServiceModel.Channels.Tools.MetadataSearchBrowse.MetadataPanel.btnProperties_Click(Object sender, EventArgs 
    My table has an XMLType column, which is also included in the select.
    Thanks

    Hi Van&boatseller,
    The duplicate thread has been deleted, and thanks for your feedback.
    Best regards
    Angie Xu

  • Why does ActionScript not support constructor overloading?

    This would make life so much simpler. Is there an official reason, or is it "we're too lazy"?

    Don't know how anyone could answer the "why" here, unless they're an Adobe engineer. You can simulate overloading with an arguments array:
    public function doSomething (myObject:Object = null, ...rest)
    So you could do something like
    public function savePersonalListItems (personName:String, ...rest) and then act upon the array of parameters that would be passed into the rest array. http://www.sephiroth.it/weblog/archives/2006/06/actionscript_3_rest_parameter.php
    If you want to overload with known entities, it's probably just as good to go with default values so the parameters become optional:
    public function saveTopThreeItems( object1:Object = new Object(), object2:Object = new Object(), object3:Object = new Object())
    Then you can pass in none or all three. Just some ideas that you may already implement.
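    A minimal sketch of acting on the rest array (the trace output is just for illustration):
    public function doSomething(myObject:Object = null, ...rest):void {
        // rest is an Array holding any extra arguments passed in
        if (rest.length == 0) {
            trace("called with just", myObject);
        } else {
            for (var i:int = 0; i < rest.length; i++) {
                trace("extra argument", i, "=", rest[i]);
            }
        }
    }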

  • Data Load

    We are trying to load 2LIS_03_BF data from SAP R/3 into SAP BW.
    The following steps were followed in the process:
    1. Delete data from the inventory queue (LBWQ, MCEX03 entries).
    2. Delete the setup tables (LBWG).
    3. Check data in the extractor (RSA3); 0 records should be there.
    4. Fill the setup tables for 2LIS_03_BX (MCNB): termination date = next day, transfer structure = 2LIS_03_BX, only valuated stock (with posting block) on 14th August.
    5. Fill the setup tables for 2LIS_03_BF (OLI1BW), with the data restricted by posting date 01.01.1999 - 14.08.2007.
    6. Generate initial status for 2LIS_03_BX in RSA1 (BW); done within the posting block.
    7. Collapse data with marker update.
    8. Start/schedule the control job on the R/3 side for BF to run every 2 hrs (LBWE), as suggested by an external consultant.
    9. Initialize the delta process for 2LIS_03_BF (RSA1); started on 15th of August but failed due to a termination in R/3.
    10. So we started full updates in two parallel data loads into BW, 3 months at a time. Each load took 2 days to bring in 2 million records.
    11. This load of data up to 14th of August 2007 finished on 4th Sep 2007.
    12. An init load WITHOUT data transfer was done successfully (to activate the delta for BW).
    13. A delta to BW was scheduled, and it transferred 0 from 0 records.
    14. Checked for data in the R/3 delta queue (RSA7): data records are shown from 01.09.2007 - 04.09.2007. We are unable to find data from 15.08.2007 to 31.08.2007.
    15. Performed a full data load from 15.08.2007 till date (in order to get the data for the missing days) in RSA1; 0 from 0 records were transferred.
    We are looking for any advice on getting the data records from 15th of August till today.
    This is a very critical issue, because we are unable to provide our business with any production reports or stock reports.
    Please, someone, help us resolve the issue, as early as possible.

    Hi,
    I have a suggestion you can try.
    As you said, your delta init failed on 15th Aug, but a later init ended successfully at today's date, right?
    So, if your init and delta activation were successful, the system would have started capturing the data through one of the update modes set by you.
    Therefore, first go to RSA7 and check whether you have any delta records there.
    If you find none, then go to transaction LBWQ and check the entries against 'MCEX03'; you should be able to see the number of records.
    Step 2: double-click on that record and check the value in the status field; if it is anything other than 'Ready', change the status to 'Ready'...
    ...and revert back to me for further steps.

  • Data Load Optimization

    Hi,
    I have a cube with the dimension information below, and it requires optimization for the data load. Its data is cleared and reloaded every week from a SQL data source using a load rule. It loads 35 million records, and the load is so slow that the data load alone (excluding calculation) takes 10 hrs. Is that common? Is there any change I should make to the structure to speed up the load, such as changing Measures to sparse or changing the position of the dimensions? Also, the block size is large, 52,920 B, which seems absurd. The cache settings are listed below as well, so please take a look and give me suggestions.
    Dimension   Density   Type       Members
    MEASURE     Dense     Accounts   245
    PERIOD      Dense     Time       27
    CALC        Sparse    None       1
    SCENARIO    Sparse    None       7
    GEO_NM      Sparse    None       50
    PRODUCT     Sparse    None       8416
    CAMPAIGN    Sparse    None       35
    SEGMENT     Sparse    None       32
    Cache settings (values in KB):
    Index cache setting: 1024
    Index cache current value: 1024
    Data file cache setting: 32768
    Data file cache current value: 0
    Data cache setting: 3072
    Data cache current value: 3049
    I would appreciate any help on this. Thanks!

    10 hrs is not acceptable even for that many rows. For this discussion, I'll assume a BSO cube.
    There are a few things to consider.
    First, what is the order of the columns in your load rule? Can you post the SQL? Is the SQL sorted as it comes in? Optimal for a load would be to have your sparse dimensions first, followed by the dense dimensions (preferably having one of the dense dimensions as columns instead of rows - for example, your periods going across like Jan, Feb, Mar, etc.).
    Second, do you have parallel data loading turned on? Look in the config for DLTHREADSPREPARE and DLTHREADSWRITE. With multithreading you can get better throughput.
    Third, how does the data get loaded? Is there any summation of the data before it is loaded, or do you have the load rule set to additive? Doing the summation in SQL would speed things up a lot, since each block would only get hit once; see the sketch below.
    I have also seen network issues cause this, as transferring this many rows would be slow (as Krishna said), and have seen cases where the number of joins in the SQL caused massive delays in preparing the data. Out of interest, how long does the actual query take if you just execute it from a SQL tool?
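    On the third point, a hedged sketch of doing the summation and sorting in the SQL itself (the table and column names are hypothetical, mirroring the outline above):
    SELECT scenario, geo_nm, product, campaign, segment,   -- sparse dimensions first
           period, measure,                                -- dense dimensions last
           SUM(data_value) AS data_value                   -- one row per intersection
    FROM   weekly_fact
    GROUP BY scenario, geo_nm, product, campaign, segment, period, measure
    ORDER BY scenario, geo_nm, product, campaign, segment;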

  • Reg "Allow Bulk Data Load"

    Hi all,
    Good morning.
    What exactly does the "Allow Bulk Data Load" option on the Company Profile page do? It is clear in the doc that it allows CRM On Demand consultants to load bulk data, but I am not clear on how they load it - for instance, do they use any tools other than the ones an administrator uses for data uploading?
    Any real-time implementation example using this option would be appreciated.
    Regards,
    Sreekanth.

    The Bulk Data Load utility is similar to the Import utility, and On Demand Professional Services can use it for imports. It is accessed from a separate URL, and once a company has allowed bulk data load, we can use it for importing their data.
    The Bulk Data Load utility uses a similar method to the Import utility for importing data, the differences being that the number of records per import is higher and you can queue multiple import jobs.

  • Cache Settings - Data Load

    Hello All,
    Do we have to set caches while performing a data load?
    Defragmentation - no cache settings needed
    Calculation - set caches to reduce calculation time (max 2 GB for index cache + data cache)
    Data load - ???
    Amarnath

    Hi Amarnath,
    There are some configuration settings that can affect data load performance:
    1. DLTHREADSPREPARE - specifies how many threads Essbase may use during the data load stage that codifies and organizes the data in preparation for being written to blocks in memory.
    2. DLTHREADSWRITE - specifies how many threads Essbase may use during the data load stage that writes data to the disk. High values may require allocation of additional cache.
    3. DLSINGLETHREADPERSTAGE - specifies that Essbase use a single thread per stage, ignoring the values in the DLTHREADSPREPARE and DLTHREADSWRITE settings.
    If you set a high value for the second setting, then you need to increase the cache size too. A sketch of the corresponding essbase.cfg entries follows below.
    Hope that answers your question.
    Regards,
    Atul Kushwaha
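    A minimal essbase.cfg sketch using these settings (the application/database names and thread counts are illustrative):
    ; essbase.cfg - parallel data load settings
    DLSINGLETHREADPERSTAGE Sample Basic FALSE   ; let the two settings below take effect
    DLTHREADSPREPARE       Sample Basic 4       ; threads for the prepare stage
    DLTHREADSWRITE         Sample Basic 4       ; threads for the write stage (may need a larger cache)
    The Essbase server must be restarted for essbase.cfg changes to take effect.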

  • Does CFINPUT Type=DateField support international dates?

    I can't seem to find it in the docs, but does the HTML
    datefield support international date names (e.g. French)? I can set
    the locale and display dates properly with LSDateFormat, but I don't
    know if that is supported with CFINPUT Type=DateField.

    for instance:
    <cfscript>
    // thai locale
    thai=createObject("java","java.util.Locale").init("th","TH");
    // get date format symbols for this locale
    dfs=createObject("java","java.text.DateFormatSymbols").init(thai);
    // localized month names
    months=dfs.getMonths();
    // localized short month names
    shortMonths=dfs.getShortMonths();
    // localized days of week
    dow=dfs.getWeekdays();
    // localized short days of week
    shortDOW=dfs.getShortWeekdays();
    </cfscript>
    <cfdump var="#months#" label="months">
    <br/>
    <cfdump var="#shortMonths#" label="short months">
    <br/>
    <cfdump var="#dow#" label="days of week">
    <br/>
    <cfdump var="#shortDOW#" label="short days of week">

  • Hanging data load

    We have a data load from a cube into an ODS and then into a new cube. There's no problem with a small number of records, and there's no problem loading into the ODS. But when loading around 250,000 records from the ODS to the cube, the loading process seems to hang. Three of four packages are OK, but the fourth (actually the third) never updates the cube.
    What reasons can there be for a loading process to hang?
    We suspect it may have to do with the following: In the start routine of the update rules, the number of records are doubled by using this code:
    * Staging table with the same structure as the data package
    DATA: wa_temp TYPE DATA_PACKAGE_STRUCTURE OCCURS 0 WITH HEADER LINE.
    * First pass: copy every record into wa_temp, flagging the copy
    LOOP AT DATA_PACKAGE INTO wa_temp.
       wa_temp-/BIC/ZOSHARE = 'N'.
       APPEND wa_temp.
    ENDLOOP.
    * Second pass: append the flagged copies back, doubling the package
    LOOP AT wa_temp.
       APPEND wa_temp TO DATA_PACKAGE.
    ENDLOOP.
    Might this code create problems for large numbers of records? I assume the new records are all created inside each package, and that each package then doubles in size. Is there a package size limit that may create problems for us?
    Do we have to split the data load to be able to upload the data? We would really like to avoid that.

    Hi,
    You can go to the InfoPackage -> Scheduler -> DataS. Default Data Transfer; you can change the data packet size there, or you can change it at the system level for all data loads (ask your Basis person for that one).
    Yes, you can update individual data packets as well. Go to the details tab of the monitor screen and mark the QM status red (it won't allow a manual update unless the request is red). Right-click on the data packet and select "update manually". Once it is updated, you can mark the request green again.
    I think huse = huge :) must be a typo.
    Cheers,
    Kedar
