Processing the dimension is taking a long time.

Hi,
I have a dimension whose table has 90 million records, with a columnstore index on top of it.
Processing the dimension takes nearly 1 hour 30 minutes; how do we improve the performance?

Hi SQL_Gun, 
First of all... are you sure this is a dimension? 90 million leaves on a dimension is pretty big (assuming that 90 million is the final cardinality at the minimum granularity of your dimension). That said, and assuming your design cannot be improved by splitting that dimension (with a header/detail schema, maybe?) or transforming it in any way, let's go for query optimization :)
Are you using SQL Server 2012 or 2014? 
SSAS launches MANY queries against the dimension (the more attributes and hierarchies, the more queries), and not all of them will necessarily be served by the batch mode that can be triggered when querying a columnstore index. 
From the relational point of view: 
- Trace the processing events happening in the relational database with SQL Server Profiler, store the Profiler trace in a table and then query it (or explore it with Power Query in Excel, for example :) ) to discover the most time- and resource-consuming queries; a hedged sketch of such a query follows after this list. 
- Then analyze those queries and check their execution plans to see whether SQL Server is using batch mode or not. 
- Narrow all your data types as much as possible, and reduce the amount of text present in the table. Both columnstore indexes and Analysis Services compress their data much better and perform much faster with integer numbers and narrower data types in general. 
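For example, once the Profiler trace has been saved to a table, a query along these lines surfaces the heaviest processing statements. Treat it as a hedged sketch: dbo.ProcessingTrace stands for whatever name you saved the trace under, while the columns follow the standard trace-table layout.
-- Top 20 longest statements captured while the dimension was processing
-- (dbo.ProcessingTrace is a placeholder for the table the trace was saved into)
SELECT TOP (20)
       TextData,
       Duration / 1000000.0 AS duration_seconds,   -- Duration in saved traces is in microseconds
       CPU, Reads, Writes
FROM dbo.ProcessingTrace
WHERE EventClass IN (10, 12)                       -- RPC:Completed, SQL:BatchCompleted
ORDER BY Duration DESC;
To see whether those statements actually hit the columnstore in batch mode, you can also pull their cached plans from the plan cache and open the plan XML to check the execution mode reported on the Columnstore Index Scan (again just a sketch, not tied to any particular SSAS processing query):
SELECT TOP (20)
       qs.total_elapsed_time,
       st.text AS statement_text,
       qp.query_plan                               -- open the XML and check the scan's execution mode (batch vs. row)
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_elapsed_time DESC;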
From the SSAS point of view: 
- Check whether there are unused or irrelevant attributes, or attributes that are present in the dimension but are not meant to be used in aggregations or filters, such as Full Address, Telephone Number, Surname, etc.
- All of them should be member properties of other attributes (such as Customer Id, using Customer Name as its NameColumn). To achieve that, make them hang from the attribute they depend on in the Attribute Relationships window and set these properties to False: 
- AttributeHierarchyEnabled
- AttributeHierarchyVisible
That will prevent SSAS from having to build indexes for these attributes, so it may improve processing performance. 
As a final recommendation, there's a fantastic series of blog posts about columnstore indexes by Niko Neugebauer, a Portuguese SQL Server MVP --> http://www.nikoport.com/columnstore/ I really encourage you to read them (so far there are 48 posts :) ); they are extremely interesting.
Regards
Pau.

Similar Messages

  • Connecting to the database taking a long time to connect to the database server

    Hi
    When I execute a procedure I get the message below at the bottom of Oracle SQL Developer:
    "Connecting to the database"
    It is taking more than 10 minutes. Please guide me.

    Hi
    Have you also installed a normal Oracle Client on your host? - a normal Oracle Client
    Did you connect with host:port:sid or with an Oracle naming service? - through a TNS service
    Can you test tnsping <alias>? - yes, it is working fine
    Do other users have the same problem? - yes
    Did you connect through a WAN or LAN connection? - LAN (intranet)
    Can you tell us more about your client/database setup?
    Database setup:
    OS: Windows 2008 Server
    Version: 11.1.0
    Client: 11.1.0
    OS: Windows 2008 Server
    Now I am not able to execute a single SELECT query on one table that contains 6 records and 15 columns; it is taking a long time and I have waited 30 minutes with still no results.
    Only one table is behaving like this; the rest are working fine.
    Edited by: user9235224 on Oct 6, 2012 7:06 PM

  • fwrite() and fread() on a shared FAT32-formatted file are taking a long time in a Mac OS X Lion C program

    Hi
    Is there any provision or API on the Mac to open a file in shared mode, the same as on Windows?
       hUSBdrive = CreateFile(pDriveName, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    We have the following scenario: a file is shared between two processes for read/write, one running on Linux and the other on the Mac, and both processes read/write the same location in the file, say "X".
    The FAT32-formatted raw data file, located on the device, is shared between the two processes.
    One process runs on a Linux device connected to the MacBook through USB. In this Linux process the file is opened using fopen(), and we have used fcntl() with the O_DIRECT flag. This process continuously reads/writes data at location "X" in the shared file.
    The other process runs on the Mac: a simple C program that opens the file on the connected device (i.e. from the USB drive) and reads/writes data using fread()/fwrite(). fopen() is used to open the file and the FILE_NOCACHE flag is used to avoid caching.
    When the value at location "X" is updated by the Mac using fwrite(), the Linux process takes around 30 seconds to read the updated value with fread().
    When the value at location "X" is updated by the Linux process using fwrite(), the Mac process also takes a long time, more than a minute, to read the updated value using fread().
    fwrite()/fread() on the Mac take a long time, whereas a Windows application using the same APIs takes milliseconds.
    Do we need to use other APIs or flags to open the file?
    Thanks in advance.

    Does anyone else face this kind of problem?
    fwrite() and fread() take a long time?
    Is there any problem reading/writing a FAT32 file on the Mac?

  • SAPinst taking a long time to load

    Hello experts,
    we are installing sapnw730 PI.
    we have noticed that SAPinst takes a long time to load (around 45 minutes) every time.
    Once SAPinst starts, we don't see any slowness during the installation and the system behaves quite normally.
    On the same host we installed SAP NW 7.1 BI, which took much less time to load.
    Any clues to reduce the SAPinst load time? Has anyone encountered this before?
    System details: Sun Solaris SPARC with 8 GB RAM and no other instances running on the host.
    thanks  in advance

    We have enough memory (8 GB RAM).
    We monitored other processes and CPU usage during the SAPinst run time; our CPU utilization was only around 50% during SAPinst.
    Any other clues?
    Thanks

  • Table value set taking a long time to open the LOV

    Hi,
    We added a table value set to a concurrent program. The table value set shows transaction numbers from the ra_interface_lines_all table. It has a very long list of values, so we added the partial-string prompt before opening the long list, but it is still taking a long time.
    Any help on this is highly appreciated.
    Thanks,
    Samba

    Hi,
    Try modifying the value set query, or create an index on the searched column to speed up the lookup; a hedged sketch follows below.
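    A minimal sketch of that index suggestion, assuming the value set's partial-string search filters on the transaction number column (the index name and column here are assumptions; match them to the actual value set query):
    create index xxar_ra_int_lines_trx_n1 on ra_interface_lines_all (trx_number);
    -- Note: an index only helps if the generated LIKE uses a trailing wildcard ('123%');
    -- a leading wildcard ('%123') will still force a full scan of the interface table.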
    Thanks & regards
    Rajan

  • Process chain taking a long time to load data into the InfoCube

    Dear Expert,
    We are loading data into the AR cube through a process chain; the data flows through
    PSA -> DSO -> Activation -> Index Deletion -> DTP (load InfoCube) -> Index Creation -> Create Aggregates.
    The index creation step takes a long time every day, around 9 to 10 hours.
    When we go into RSRV and repair the InfoCube, the data loads fast; we are doing this (RSRV) every day. In DB02 we have seen that 96% of the tablespace is used.
    Please suggest a permanent solution, and whether this is a BI issue or a Basis issue.
    Regards,
    Ankit

    Hi,
    We are loading data into the AR cube through a process chain; the data flows through
    PSA -> DSO -> Activation -> Index Deletion -> DTP (load InfoCube) -> Index Creation -> Create Aggregates.
    In the above steps, instead of Create Aggregates it should be the Roll Up of Aggregates process.
    You can ask the Basis team to check the tablespace in transaction DB02OLD/DB02.
    Check whether there is a long-running job in SM66/SM50 and kill that job.
    Check that there are enough batch processes to perform the steps.
    Hope this helps.
    "Assigning points is the way to say thanks on SDN."
    Br
    Alok

  • Process chain taking a long time to complete

    Hi,
    I am having the following issues with the daily process chain loads:
    1) The PSA deletion takes a very long time on Fridays only (nearly 3 hours). On other days it gets deleted in 1 hour at most.
    2) Loading just 381 records via DTP from DSO to cube (delta load) takes nearly 2 hours. No major routines are written in the transformation.
    How do I analyse a process in a process chain that does not fail but takes a very long time to complete? Is there any tool or transaction in BW that can help us analyse why the process chain takes so long on different days? One day it completes in 8 hours, another day it takes 12 hours.
    None of the transactions (SM37, SM50, SLG1, logs, etc.) are giving me any help in analysing this issue.
    If it fails we have error logs to check and analyse, but when it does not fail, how can we analyse and fix the delay and reduce the data loading time? Please guide me.
    Thanks in advance.
    Vishwanath

    Hi,
    1) This might be due to poor system performance; there may not be enough work processes available in the system. - What needs to be done if there are not enough work processes available? I can actually see that lots of dialog processes were free during these jobs, and some background processes were free as well.
    Look at increasing the number of work processes. You can do two things: cancel jobs that are not progressing, or cancel jobs that are not important for the time being.
    Or, if possible, you can increase the number of servers.
    When a job starts, it first runs in background. You can also start it in dialog, but a dialog load will eventually time out. Many child jobs can be created for a single background job; they may run in dialog, and you can monitor them through SM66.
    2) Check ST04 for lock waits and deadlocks; if the same lock persists for long, check with the Basis team.
    What needs to be conveyed to the Basis team when these locks happen?
    Generally these are temporary locks that are released after some time. If they persist for a long time, contact the Basis people. Alternatively, if you can work out which job is creating the lock (double-click on the job, click on Job details, note the PID and check in ST04 whether it matches) and that job is not important, you can cancel it.
    Also check the table space available in ST04; usage should not be more than 90%.
    What needs to be conveyed to the Basis team if the usage is more than 90%?
    You can ask them to increase the table space.
    Check the delay column in SM37; it should not be high.
    What needs to be done if the delay column is high? It is actually high for these jobs (PSA deletion and the delta load from DSO to cube via DTP).
    If the delay is high, it means there is no free work process and jobs are sitting in Released state; the solution is the one I already gave above.
    Check whether the PIDs showing lock waits in SM66 are progressing or not.
    If they are not progressing, what action needs to be taken?
    Suppose a dialog job is running; its status may change, i.e. that dialog job goes to Stopped status and some other job starts. In any case, if it does not progress, you can cancel the job.
    Check OS07 for the database; if the idle time is less than 20% then it is a problem.
    If the idle time is less than 20%, what action needs to be taken?
    You can contact the Basis people.
    Check SM21, and check the RFC connections to the other source systems in SM59.
    What needs to be checked in SM21 and SM59 specifically? Which parameters do I need to check?
    In SM21, if a red status is shown, check the log entry beside it; it may be something like a terminal disconnect.
    In SM59, click on Connection Test.
    If the problem occurs with only one source system, check the performance of that system.
    How do I check the performance of these systems? Are any tools available in the R/3 system to check its performance?
    No; in this way you can check all the source systems: go to SM59, double-click on the desired source system, and click on Connection Test.
    If all of these issues persist, then it is a performance problem; check with the Basis team.
    Are there any special settings that need to be maintained to achieve better performance of process chain loads?
    You can improve process chain performance with parallel processing, i.e. split the loads using selections and execute them in parallel.
    Regards,
    Debjani

  • Hyperion System 9.3.1 reports taking a long time the very first time

    We are on Hyperion System 9.3.1. The Financial Reporting reports take a long time (around 2 to 3 minutes) the very first time for each login. Subsequent reports run faster.
    The behaviour is the same in the Production and Development environments.
    All the reporting services have been given enough JVM heap size.
    FYI, Reporting and Workspace are running on the same server, and Workspace/Reporting are clustered across two servers. The HFM application runs on a different server, HFM web on another, and Shared Services on yet another server.
    Any help would be greatly appreciated.
    Thanks.

    The reason they run quicker the subsequent times, is because the data has already been cached in the system.
    You could try the usual tricks to speed the report up:
    - move items into POV
    - have children and parent in the same row
    - arrange dimensions in inverse outline order
    - remove excessive formatting
    - push report calculations back to the data source
    We have found that using lots of dynamically calculated members also slows down reports, so try and limit the number of these.
    Hope this helps. If not maybe give us an idea of how the report is created to see if other changes could be made.

  • Extracting data from ECC to BW: data loading is taking a long time

    Hi All,
    I am extracting data from ECC to the BI system, but the data load is taking a long time. The InfoPackage has been running for the last 6 hours and is still showing yellow. I manually set it to red, deleted the request, and applied a repeat of the last delta, but the same problem occurs. The status shows that the background job has not finished in the source system. We asked Basis, and the Basis people killed that job; we scheduled the chain again, and again the same problem occurs. How can I solve this issue?
    Thanks ,
    chandu

    Hi,
    There are different places to track your job. Once your job is triggered in BW, you can track where exactly the load is taking more time and why. Follow the steps below:
    1) After the InfoPackage is triggered, take the request number and go to the source system to check your extraction job status.
    You can get the job status by taking the request number from BW and going to transaction SM37 in ECC. Then enter the request number wrapped in '*' wildcards at the beginning and end. Also give '*' as the user name.
    Job name:  REQ_XXXXXX
    User Name: *
    Check whether the job is completed, cancelled, or ended in a short dump. If the job is still running, check in SM66 whether you can see any process; if not, check ST22 or SM21 in ECC accordingly. If the job is complete, do the same check on the BW side.
    2) Check whether the data arrived in the PSA; if not, check whether the transfer routines or start routines contain bad SQL or code. Do the same for the update rules.
    3) Once the load is through the source system (ECC), transfer rules, and update rules, the next task is updating the data targets, which can sometimes take longer depending on parameters such as the number of parallel processes used to update the database. Check whether updating the database is the slow part; you may also need to check with your DBA.
    At all times you should see at least one process running in SM66 until your job completes. If not, you will see a log in ST22.
    Let me know if you still have questions.
    Assigning points is the only way of saying thanks in SDN.
    Thanks,
    Kumar.

  • The ODS activation is taking a long time

    Hi,
    We are on SAP NetWeaver BI 701 (Support Package 5).
    We created a Z ODS; it will contain a lot of data (180,000,000 records at month-end) and we want to generate specific reports on it.
    The activation is taking a long time; I assume it is because we checked the flag "SIDs Generation upon Activation". I am confused about this flag. Do I really need it? Is this flag the only problem?
    Thanks for your help.
    Victoria

    Hi Victoria:
       If your Z DSO is used only for staging purposes (you don't have queries based on this DSO and you send the data to another DSO or to an InfoCube) then you don't need to check the "SIDs Generation Upon Activation" box.
    Even more, to achieve better performance during data loads in this scenario, you might consider using a Write Optimized DSO instead of a Standard DSO, but if you decide to take this alternative don't forget to select the "Do Not check Uniqueness of Data" box if you need to write several records with the same Semantic Key.
    Regards,
    Francisco Milán.

  • Query taking a long time (more than 24 hours) to extract the data

    Hi,
    The query is taking a long time (more than 24 hours) to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, the plan goes for FULL TABLE SCANs. Please advise.
    SQL> explain plan for
         select a.account_id, round(a.account_balance,2) account_balance,
                nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
                to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
                to_char(nvl(i.payment_due_date, to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
                ah.current_balance - ah.previous_balance amount,
                decode(ah.invoice_id, null, 'A', 'I') transaction_type
           from account a, account_history ah, invoice i
          where a.account_id = ah.account_id
            and a.account_type_id = 1000002
            and round(a.account_balance,2) > 0
            and (ah.invoice_id is not null or ah.adjustment_id is not null)
            and ah.CURRENT_BALANCE > ah.previous_balance
            and ah.invoice_id = i.invoice_id(+)
            AND a.account_balance > 0
          order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index details:
    SQL> select INDEX_OWNER, INDEX_NAME, COLUMN_NAME, TABLE_NAME from dba_ind_columns
         where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005 (a balance of at least 0.005 rounds to 0.01 or more, so this single predicate covers both).
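    For instance, this quick sanity check (hypothetical, just to illustrate the boundary) shows why 0.005 is the cut-off:
    select round(0.004, 2) as too_small, round(0.005, 2) as just_enough from dual;
    -- returns 0 and .01, so any balance below 0.005 fails round(...,2) > 0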
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
    order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;

  • HT4759 iCloud uploads take a long time and the phone cannot be disconnected during the upload; I want to deactivate the storage service and get my money back

    Hello. I've been subscribed to iCloud for $20 per year and I find it useless for many reasons: I cannot disconnect my mobile while it is uploading, and it takes a long time to upload my data. It's not a reliable system, which is why I need to deactivate the storage service and get my money back. Thanks.

    The "issues" you've raised are nothing to do with the iCloud service.
    No service that uploads data allows you to disconnect the device you are uploading from while uploading data. Doing so would prevent the upload from completing. It is a basic requirement for any uploading service that you remain connected to it for uploading to be possible.
    The time it takes to upload data to iCloud is entirely dependent on how fast your Internet connection is, and how much data you are uploading. Both of these things are completely out of Apple's control. Whichever upload service you use will be affected by the speed of your Internet connection.

  • Cisco WS-C6513 taking a long time to save the configuration

    Hi,
    Our Cisco WS-C6513 is taking a long time to save the configuration.
    Any ideas?
    Thank you 

    Hello,
    Do you have the correct dial plan? It depends very much on the country you live in and on your VoIP operator.
    Try finishing your dialing with # - this may speed it up.

  • VA05 & VA05N - Taking Long time to Give the Output.

    Dear All,
    VA05 & VA05N - Taking Long time to Give the Output for
                              Single date & Single Sales Office
    if I create Z-Program (VBAK) also taking Long time for Single date & Single Sales Office.
    Please Give some idea to Optimization the VA05 & VA05N.
    Please Give your Valuable solution.
    Thanks,
    Durai.V

    Dear Lakshmipathi,
    At my previous client (ECC 5.0), VA05N executed very fast for a one-month date range across all sales offices.
    They had been running SAP for around 3 years and their data was also huge, but it gave output fast.
    At my current client (also ECC 5.0), which has been running SAP for around 2.5 years, it takes a long time to give output for a single date and one sales office.
    However, the billing details report VF05N executes very fast.
    Thanks,
    Durai.V

  • Crystal report from MSC taking a long time the first time

    Hello Friends,
    I have a report designed in Crystal Reports to display BP details. It is giving some performance issues. Analysis shows that it takes a long time immediately after I restart the machine; from the next run onwards it is not as bad.
    If anybody is facing the same performance issue with Crystal Reports, kindly let me know the reasons for it. I would also like to know ways to increase the performance.
    Thanks for all your support.
    Best regards,
    Swarna Seeta

    Hello Swarna
    Reporting is quite a heavy component that interacts with many other DLLs.
    When you generate a report from the mobile application for the first time, the UI framework recognizes that it is a reporting call and instantiates the Reporting Manager. The Reporting Manager is the single point of contact for the entire reporting functionality. During its initialization,
    it needs to instantiate all the other working components, such as the error handler, data handler, resource component, Crystal Reports, etc. The resource component is a COM component, so there is a lot of marshalling/unmarshalling from .NET calls. Since all these sub-components are singleton classes, they are initialized only once the first time, and the same instances are reused on subsequent attempts.
    Hope I have made it a bit clearer.
    Best Regards
    Shankar
