Database Statistics to improve performance

I skipped the creation of database statistics after the import stage during the ECC installation. I have installed PI 7.1 and ECC 6.0 SR3 as an MCOD installation. My installation is on a 64-bit Windows 2003 server, Oracle 10 database and a Unicode kernel.
Would it be okay to do the database statistics creation now that my installation is finished and both PI 7.1 and ECC 6.0 are working correctly?
I would be calling it via the command line:
brconnect.exe -u / -c -o summary -f stats -o SAPSR4 -t all -p 8 -f nocasc
My schema for the ECC system is SAPSR4.

Yes, you can do this right now.
-p is set to 8, so run it off-hours or when nobody is using the system, as it may impair performance while the job runs.
Good luck.
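If you want to verify afterwards that the statistics really were created, a quick sanity check (a minimal sketch; adjust the owner to your schema) is to look for tables without statistics in the data dictionary:
select count(*)
  from dba_tables
 where owner = 'SAPSR4'
   and last_analyzed is null;
A count of (or near) zero means BRCONNECT has covered the schema.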

Similar Messages

  • How to optimize Database Calls to improve performance of an application

    Hi,
    I have a performance issue with my application. It takes a lot of time to load, as it makes several calls to the database, and the resultset returns more than 2000 records. I need to know the best way to improve the performance.
    1. What is the solution to optimize the database calls so that I can improve the performance of my application and also improve the turnaround time to load the web pages?
    2. Stored procedures are a good way to get the data from the result set iteratively. How can I implement this solution in Java?
    This is very important, and any help is greatly appreciated.
    Thanks in Advance,
    Sailatha

    latha_kaps wrote:
    I have a performance issue with my application. It takes a lot of time to load, as it makes several calls to the database, and the resultset returns more than 2000 records. I need to know the best way to improve the performance.
    1. What is the solution to optimize the database calls so that I can improve the performance of my application and also improve the turnaround time to load the web pages?
    2. Stored procedures are a good way to get the data from the result set iteratively. How can I implement this solution in Java?
    This is very important, and any help is greatly appreciated.
    1. 2000 records inside a resultset is not a big number.
    2. Which RDBMS do you use?
    Concerning the answer to 2., you have different possibilities. The best approach is always to handle as many transactions as possible inside the database; therefore a stored procedure is the best choice imho.
    Below there is an example for an Oracle RDBMS.
    Assumption #1: you have created an object type (demo_obj) in your Oracle database:
    create type demo_obj as object( val1 number, val2 number, val3 number);
    create type demo_array as table of demo_obj;
    /
    Assumption #2: you've created a stored function to get the values of the array in your database:
    create or replace function f_demo ( p_num number )
    return demo_array
    as
        l_array demo_array := demo_array();
    begin
        select demo_obj(round(dbms_random.value(1,2000)),round(dbms_random.value(2000,3000)),round(dbms_random.value(3000,4000)))
        bulk collect into l_array
          from all_objects
         where rownum <= p_num;
        return l_array;
    end;
    /
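    To sanity-check the function before involving Java, you can call it straight from SQL (a quick sketch; 5 is an arbitrary row count):
    select * from table(f_demo(5));
    For getting the data out of the database, use the following Java program (please watch the comments):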
    import java.sql.*;
    import java.io.*;
    import oracle.sql.*;
    import oracle.jdbc.*;
    public class VarrayDemo {
         public static void main(String args[]) throws IOException, SQLException {
              DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
              Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:oci:@TNS_ENTRY_OF_YOUR_DB", "scott", "tiger"); // I am using OCI driver here, but one can use thin driver as well
              conn.setAutoCommit(false);
              Integer numRows = new Integer(args[0]); // variable to accept the number of rows to return (passed at runtime)
              Object attributes[] = new Object[3]; // "attributes" of the "demo_obj" in the database
              // the object demo_obj in the db has 3 fields, all numeric
              // create an array of objects which has 3 attributes
              // we are building a template of that db object
              // the values i pass below are just generic numbers, 1,2,3 mean nothing really
              attributes[0] = new Integer(1);
              attributes[1] = new Integer(2);
              attributes[2] = new Integer(3);
              // this will represent the data type DEMO_OBJ in the database
              Object demo_obj[] = new Object[1];
              // make the connection between oracle <-> jdbc type
              demo_obj[0] = new oracle.sql.STRUCT(new oracle.sql.StructDescriptor(
                        "DEMO_OBJ", conn), conn, attributes);
              // the function returns an array (collection) of the demo_obj
              // make the connection between that array(demo_array) and a jdbc array
              oracle.sql.ARRAY demo_array = new oracle.sql.ARRAY(
                        new oracle.sql.ArrayDescriptor("DEMO_ARRAY", conn), conn,
                        demo_obj);
              // call the plsql function
              OracleCallableStatement cs =
                   (OracleCallableStatement) conn.prepareCall("BEGIN ? := F_DEMO(?);END;");
              // bind variables
              cs.registerOutParameter(1, OracleTypes.ARRAY, "DEMO_ARRAY");
              cs.setInt(2, numRows.intValue());
              cs.execute();
              // get the results of the oracle array into a local jdbc array
              oracle.sql.ARRAY results = (oracle.sql.ARRAY) cs.getArray(1);
              // flip it into a result set
              ResultSet rs = results.getResultSet();
              // process the result set
              while (rs.next()) {
                   // since it's an array of objects, get and display the values of the underlying object
                   oracle.sql.STRUCT obj = (STRUCT) rs.getObject(2);
                   Object vals[] = obj.getAttributes();
                   System.out.println(vals[0] + " " + vals[1] + " " + vals[2]);
              }
              // cleanup
              cs.close();
              conn.close();
         }
    }
    For selecting 20,000 records it takes only a few seconds.
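    To run the example (a usage note; the ojdbc driver location varies by installation): compile with the Oracle JDBC driver on the classpath and pass the row count as the only argument, for example: java VarrayDemo 2000.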
    Hth

  • Effect of Updating Database statistics to program's performance

    Hi,
    What is the effect of updating database statistics on a program's performance?
    Will the program run faster after the update?
    Thanks a lot!

    Hi,
    >
    freishz wrote:
    > What is the effect of updating database statistics on a program's performance?
    Updating the statistics refreshes the values the optimizer uses for building an execution plan.
    The statistics contain values like:
    table statistics:
    - number of rows in the table
    - number of blocks / pages of the table
    - number of empty blocks / pages
    index statistics:
    - number of distinct keys in the index
    - number of leaf pages of the index
    - clustering factor of the index
    - height (levels) of the index
    column statistics:
    - number of distinct values for an (indexed) column
    - low value and high value for an (indexed) column
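    These counters map directly onto Oracle's dictionary views, so you can inspect what the optimizer sees. A minimal sketch (MYTAB is a placeholder table name):
    select num_rows, blocks, empty_blocks
      from user_tables
     where table_name = 'MYTAB';
    select distinct_keys, leaf_blocks, clustering_factor, blevel
      from user_indexes
     where table_name = 'MYTAB';
    select column_name, num_distinct, low_value, high_value
      from user_tab_col_statistics
     where table_name = 'MYTAB';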
    >
    freishz wrote:
    > Will the program run faster after the update?
    With updated statistics the runtime of the program might be
    - higher
    - lower
    - the same
    It depends on the difference between the execution plan generated after the update and the execution plan used before the update.
    In general the runtime will be the same or lower, but I have seen cases where updating statistics led to higher runtimes; these are very rare.
    Why are you asking this question? Do you have slow SQL with an inefficient execution plan?
    Statistics are one of many things that need to be analyzed...
    Kind regards,
    Hermann

  • How to improve Performance of the Statements.

    Hi,
    I am using Oracle 10g. My problem is that when I execute and fetch records from the database it takes a lot of time. I have also created statistics, but to no avail. What do I have to do now to improve the performance of SELECT, INSERT, UPDATE and DELETE statements?
    Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine and Windows XP with 512 MB RAM on the client machine?
    Please give me advice on how to improve performance.
    Thank u...!

    What and where to change parameters and values?
    Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
    Anyone who advises you to change some parameter to some value without any further information shouldn't be listened to.
    Nicolas.

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can anyone explain the steps to improve the performance of dimension and fact tables?
    Thanks in advance....
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table, and do that regularly.
    Partition your compressed fact table physically based on, for example, 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on, for example, 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates on subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the business logic behind these objects: the ones that have low cardinality, that is, relate closely to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and split the dimension accordingly.
    Use line item dimensions where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
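    If you want a quick look at the dimension-to-fact ratio directly in SQL, a rough sketch (the cube name SALES, and hence the table names, is made up; BW names fact and dimension tables /BIC/F<cube> and /BIC/D<cube><n>):
    select (select count(*) from "/BIC/DSALES1") /
           (select count(*) from "/BIC/FSALES") * 100 as dim_pct_of_fact
      from dual;
    A commonly cited rule of thumb is that a dimension larger than roughly 10% of the fact table is a candidate for a line item dimension.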
    Good luck!
    Kind Regards
    Andreas

  • Constructing Database Statistics

    When and where should I use Database Statistics?  I'm trying to improve overall system performance.  Are there any cons to running this process?
    Thank You All.

    Yes, BRCONNECT will collect DB stats on the tables as it determines appropriate based on monitoring criteria, which actually allows DB stats to be refreshed based on some percentage of rows added to the table (the default is 50%) since the last time stats were collected.
    The traffic light status on the performance tab simply checks whether DB stats are missing on a fact or dimension table, setting the status to RED. If the E fact table stats are more than 30 days old, or four or more dimension tables' stats are more than 30 days old, the status is set to yellow. So yellow could mean your stats are really pretty accurate, or disastrously out of date.
    I'd bring your DBA into the discussion to determine the best course of action.
    You can set the stats collection job in the performance tab to run automatically based on loads/deltas, or you could schedule the job to occur perhaps on weekends. Better yet, set up a process chain to do it. The main issue is probably whether you have time in your daily load window to refresh the stats.
    If you set the option to collect stats after a load/delta, stats are only collected on the F fact table and the dimension tables. If you run the job yourself or schedule it, it will collect stats on both the F and E fact tables, the dimension tables, and all related master data tables.
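    Since the yellow status hinges on the 30-day age check, you can look at the actual age of the stats yourself. A minimal sketch against the Oracle dictionary (the cube name SALES is made up):
    select table_name, last_analyzed
      from dba_tables
     where table_name in ('/BIC/FSALES', '/BIC/ESALES')
        or table_name like '/BIC/DSALES%';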

  • FI-CA events to improve performance

    Hello experts,
    Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
    It seems that these specific exits associated with BW events have been developed especially to improve performance.
    Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
    Please answer first whether the column is stored in a separate lobsegment.
    No. The table, index, LOB and LOB index use the same tablespace. I missed adding this point (moving the LOB to a separate tablespace) as part of the table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
    Is this the one you are referring to
    http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
    By moving the CLOB column to a different block size, I will test the performance improvement it gives and will share the results.
    We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
    So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
    @Billy
    We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job by the client, the XML data gets deleted.
    Regarding binding the LOB on the client side, we will check that as well to reduce round trips.
    By changing the block size, I can set db_32K_cache_size=2G and keep this table in the cache. If I put my table directly into the cache, it will age out all other operations from the buffer, which makes things worse for us.
    This insert is part of a transaction (registration of a fingerprint), and as of now it is the only statement taking time compared to the other statements in the transaction.
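    For reference, a sketch of the pieces involved in moving the CLOB to a 32K block size tablespace (the tablespace, datafile and table/column names here are made up, and the sizes are illustrative):
    alter system set db_32k_cache_size = 2G scope=both;
    create tablespace ts_xml32k datafile '/oradata/ts_xml32k01.dbf' size 10g blocksize 32k;
    alter table fingerprint_xml move lob (xml_data) store as (tablespace ts_xml32k);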
    Thanks,
    Arun

  • Improving Performance

    Hi Experts,
    How can we improve the performance of a SELECT without creating a secondary index?
    In my select query I am not using primary key fields in the WHERE condition,
    so I want to know how we can improve the performance.
    One more thing: if we create a secondary index, what are the disadvantages of that?
    Thanks & Regards,
    Amit.

    If you select from a table without using an appropriate index or key, the database will perform a table scan to get the required data. If you accept that this will be slow but must be used, then the key to improving the performance of the program is to minimise the number of times it scans the table.
    Often the way to do this is not what would normally be counted as good programming.
    For example, if you SELECT inside a loop or SELECT using FOR ALL ENTRIES, the system can end up doing the table scan many times, because the SQL is broken up into lots of individual/small selects passed to the database one after the other. So it may be quicker to SELECT from the table into an internal table without specifying any WHERE conditions, and then delete the rows from the internal table that are not wanted. This way you do only a single table scan on the database to get all records. Of course, this uses a lot of memory, which is often the trade-off. If you have a partial key and are then selecting based on non-indexed fields, you can get all records matching the partial key and then throw away those where the remaining fields don't meet requirements.
    Andrew

  • How to eliminate joins to improve performance

    I have a query:
    from D236OT00.ASN1_COMP_NB CP LEFT OUTER JOIN D236OT00.NB01_COMP_DEP_NB CD
         on(CP.SRVC_LOC_ID = CD.CHLD_SRVC_LOC_ID
              and CP.ORD_ITEM_ID = CD.CHLD_ORD_ITEM_ID
              and CP.PRMRY_COMP_CD = CD.CHLD_PRMRY_COMP_CD
              and CP.SECNDRY_COMP_CD = CD.CHLD_SCN_COMP_CD
         and CP.ORD_ITEM_SEQ = CD.CHLD_ORD_ITEM_SEQ
              and CD.DEP_TYPE_CD = 'HI'
              and CD.RECORD_EFF_END_DT = '9999-12-31'
              and CD.EFF_END_DT > CURRENT DATE)
    LEFT OUTER JOIN D236OT00.ASN2_RESOURCE_NB RS
    on(CP.ORD_ITEM_ID = RS.ORD_ITEM_ID
              and CP.ORD_ITEM_SEQ = RS.ORD_ITEM_SEQ
              and CP.PRMRY_COMP_CD = RS.PRMRY_COMP_CD
              and CP.SECNDRY_COMP_CD = RS.SECNDRY_COMP_CD
              and RS.RECORD_EFF_END_DT = '9999-12-31'
              and RS.RESR_EFF_END_DT > CURRENT DATE)
    LEFT OUTER JOIN D236OT00.CSTI_CUST_INFO_MP CS
    on(RS.OWNER_ACCT_ID = CS.OWNER_ACCT_ID
              and RS.OWNER_SRVC_LOC_ID = CS.OWNER_SRVC_LOC_ID
              and RS.RESR_ID = CS.RESR_ID and RS.RESR_GRP_TYP = CS.RESR_GRP_TYP
              and RS.RESR_TYPE = CS.RESR_TYPE and CS.START_NBR = ''
              and CS.EFF_END_DT = '9999-12-31' and CS.RATE_END_EFF_DATE = '9999-12-31')
    where CP.ORD_ITEM_ID = ? and CP.ORD_ITEM_SEQ = ? and CP.PRMRY_COMP_CD = ? and
    CP.RECORD_EFF_END_DT = '9999-12-31' and CP.EFF_END_DT > CURRENT DATE
    It has quite a few joins... is there any way to eliminate these joins to improve performance? Kindly help, as this is urgent.

    You have not used code tags, so the query is very hard to read.
    By saying "urgent" you have lost 99% of the volunteers willing to answer; it is considered rude.
    Your post is incomplete: there is no database version and no OS version.
    Where is your research? How did you come to that conclusion? Where is the explain plan?
    I think you have to repost along with the complete details.
    Thank you.

  • Times ten to improve performance for search results in Oracle eBS

    Hi ,
    We have various search scenarios in our ERP implementation using Oracle Apps eBS, for example searching for an item. Oracle Apps does provide item search, but the performance is not great. We have about 30 million items, and hence, to improve the performance of the search, we thought TimesTen may help.
    Can anyone please clarify whether TimesTen can be used to improve performance on the eBS database, and if yes, how?

    Vikash,
    We were thinking along the same lines (using TimesTen for massive item search in e-Business Suite), in our case massive item / parametric search leveraging the Product Information Management application. We were thinking about setting up a POC on a Linux server with a Vision instance. Should we compare notes?
    SParker

  • BIA to improve performance for BPS Applications

    Hi All,
    Is it possible to improve the performance of BPS applications using BIA? Currently we are running applications on BI-BPS which, because of a huge range of periods, have a performance issue.
    Could you please share whether BIA would help with the read and write operations of BPS, and to what extent performance can be increased?
    An early reply would be appreciated, as the system is in really bad shape and users are grappling with poor performance.
    Rgds,
    Rajeev

    Hi Rajeev,
    If the performance issue you are facing is while running a query on a real-time (transactional) infocube used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed, indexed requests. It combines this data with the plan buffer cache and produces the result.
    Hence, if you are facing issues with query response time, BIA will definitely help.
    Regards,
    Praveen

  • Improving performance while adding groups

    Hello,
    I've been monitoring my Crystal Reports for a week or so, and report performance is going for a toss. Let me narrate this in a little detail. I have created 3 groups to select dynamic parameters, and each group has a formula of its own. In my parameters I have added one parameter with 7 entities (which is hard coded); a user can select any 3 entities out of those seven when initially refreshing the document. Each of the parameter entities is bundled in a conditional formula (mentioned under formula fields) for each entity. The user may select any entity and get the respective data for that entity.
    For all this I have created 3 groups, and the same formula is pasted under all 3 groups. I have then made the formula group to be selected under the Group Expert. The report works fine and yields correct data. However, during the grouping of the formulas Crystal selects all the database tables from the database fields, as these tables are mentioned under the group formula. Agreed, all fine.
    But when I run the report, "Show SQL Query" selects all the database tables under the SELECT clause, which should not be the case. Due to this, even if I have selected an entity which has got only 48 to 50 records, Crystal tends to select all 1,656,053 records from the database fields, which is hampering Crystal's performance big time. When I run the same query in SQL it retrieves the data in just 8 seconds, but as Crystal selects all the records, it gives me data after 90 seconds, which is frustrating for the user.
    Please suggest a workaround for this. Please help.
    Thank you.

    Hi,
    I suspect the problem isn't necessarily just your grouping but your record selection formula as well. If you do not see a complete WHERE clause, it is because your record selection formula is too complicated for Crystal to translate to SQL.
    The same could be said for your grouping. There are two suggestions I can offer:
    1) Instead of linking the tables in Crystal, use a SQL Command and generate your query in SQL directly. You can use parameters and, at the very least, get a working WHERE clause.
    2) Create a stored procedure or view that contains the logic you need to retrieve the records.
    At the very least you want to be able to streamline the query to improve performance. Fixing the grouping may not be possible, but my guess is it's more the selection formula than the grouping.
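    For suggestion 1, a minimal sketch of what such a SQL Command could look like (the table and column names are made up; {?Entity} has to be created as a Command parameter, and the exact quoting of the substituted value depends on your Crystal version and database):
    SELECT t.entity_name, t.amount, t.posting_date
    FROM sales_detail t
    WHERE t.entity_name = {?Entity}
    This way the entity filter reaches the database as a real WHERE clause instead of all records being filtered on the client.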
    Good luck,
    Brian

  • Collecting database statistics in 10g

    Hi,
    We are using Oracle database 10.2.0.4 on an HP OS. As we know, in 10g AWR automatically collects stats every hour. Is there any need to collect database stats again manually by using dbms_stats?
    Is there any difference between stats collected by AWR and by dbms_stats?
    "execute sys.dbms_stats.gather_system_stats('Start') ;
    execute sys.dbms_stats.gather_schema_stats( ownname=>'pc01', cascade=>FALSE, degree=>dbms_stats.default_degree, estimate_percent=>100);
    execute dbms_stats.delete_table_stats( ownname=>'pc01', tabname=>'statcol');
    execute sys.dbms_stats.gather_system_stats('Stop');"
    Any idea?

    Hello...
    Thanks a lot.
    Some of our production systems running on Oracle 10g collect database stats manually once a month, using dbms_stats, to improve system performance.
    So is there any need to collect stats manually?
    As per my understanding there is no need to collect them manually, because AWR is already doing this.
    Am I right?
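    For what it's worth: in 10g it is the automatic maintenance job GATHER_STATS_JOB (run by the database scheduler), not AWR itself, that gathers optimizer statistics nightly; AWR only takes performance snapshots. A quick sketch to check that the job is enabled:
    select job_name, enabled
      from dba_scheduler_jobs
     where job_name = 'GATHER_STATS_JOB';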

  • Improving performance of query with View

    Hi ,
    I'm working on a stored procedure where certain records have to be eliminated. Unfortunately, the tables involved in this exception query are present in a different database, which will lead to a performance issue. Is there any way in SQL Server to store this query in a view, store its execution plan, and make it work like an sp? While I believe it's kind of a crazy thought, is there any better way to improve the performance of a query when it is accessed across databases?
    Thanks,
    Vishal.

    Do not try to solve problems that you have not yet confirmed to exist. There is no general reason why a query (regardless of whether it involves a view) that refers to a table in a different database (NB: DATABASE, not INSTANCE) will perform poorly.
    As a suggestion, write a working query using a duplicate of the table in the current database. Once it is working, then worry about performance. Once it is working as efficiently as it can, change the query to use the "remote" table rather than the duplicate. Then determine whether you have an issue. If you cannot get the level of performance you desire with a local table, then you most likely have a much larger issue to address. In that case, perhaps you need to change your perspective and approach to accomplishing your goal.
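    To illustrate: a cross-database query in SQL Server only differs by the three-part name, which makes the suggested local-first approach easy to follow (the database, schema and table names here are made up):
    -- development: local duplicate of the table
    SELECT o.OrderId, o.Amount FROM dbo.Orders o;
    -- afterwards: the same query against the other database on the same instance
    SELECT o.OrderId, o.Amount FROM OtherDb.dbo.Orders o;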

  • Improve Performance with QaaWS with multiple RefreshButtons??

    HI,
    I read that a connection opens a maximum of 2 QaaWS requests. I want to improve performance.
    Currently I refresh 6 connections with one button. Would it improve performance if I split this 1 button with 6 connections into 3 buttons with 2 connections each?
    Thanks,
    BWBW

    Hi
    HTTP 1.1 limits the number of concurrent HTTP requests to a maximum of two, so your dashboard will actually be able to send and receive a maximum of 2 requests simultaneously; the third will stand by until one of the first two is handled.
    QaaWS performance is mostly affected by database performance, so if you plan to move to LO (Live Office) to improve performance, I'd recommend you use LO from WebI report parts, as if you use LO to consume a universe query you will experience similar performance limitations.
    If you actually want to consume WebI report parts and need report filters, you can also consider XI 3.1 SP2 BI Services, where performance is better than QaaWS and interactions are also easier to implement.
    Hope that helps,
    David.
