Help - Database Growth and loading best practice

Dear all,
I'm using NW 7.1.
My data is consuming a lot of space. I notice the database is growing at about 20 GB every 2 months, and I wonder why the growth is so fast.
I use process chains to load my data, and I always delete the existing data before doing a full update of my cube.
I keep only 1 week's worth of data in my PSA.
I upload all my master data daily (full and delta uploads).
Please advise.
Regards,
-Dedys

Hi,
I have looked at the tablespaces, and here is what they show (total size in MB, free space in MB, and percent used):
Tablespace     Size (MB)      Free (MB)    Used (%)
PSAPSR3        140.000,00     10.018,13    93
PSAPSR3700     21.600,00      1.421,81     93
PSAPSR3DB      5.000,00       1.639,81     67
PSAPSR3USR     20,00          19,31        3
PSAPTEMP       1.000,00       998,00       0
PSAPUNDO       9.020,00       8.624,94     4
SYSAUX         320,00         55,25        83
SYSTEM         560,00         7,50         99
The one that always fills up is PSAPSR3.
What kind of data is stored in PSAPSR3?
Please advise.
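For anyone trying to see what is actually consuming the space, a small JDBC sketch along these lines could list the largest segments in PSAPSR3 (the connection string and credentials are placeholders; it assumes the Oracle JDBC driver is on the classpath and a user with SELECT privilege on DBA_SEGMENTS):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class TablespaceTopSegments {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - adjust host, SID, user, and password.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:SID", "system", "password");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT * FROM (SELECT segment_name, segment_type, "
                   + "ROUND(bytes / 1024 / 1024) AS mb FROM dba_segments "
                   + "WHERE tablespace_name = ? ORDER BY bytes DESC) "
                   + "WHERE ROWNUM <= 20")) {
                ps.setString(1, "PSAPSR3");
                try (ResultSet rs = ps.executeQuery()) {
                    // Print the 20 largest tables/indexes in the tablespace.
                    while (rs.next()) {
                        System.out.printf("%-30s %-18s %8d MB%n",
                            rs.getString("segment_name"),
                            rs.getString("segment_type"),
                            rs.getLong("mb"));
                    }
                }
            }
        }
    }
In a BW system the top entries are typically PSA tables (/BIC/B*), fact tables, and DSO change logs, which ties back to how much history is kept in the PSA.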

Similar Messages

  • General Oracle Database Performance trouble solving best practice Steps

    We use an Oracle 11g DB on Windows 2008 R2 as a web application backend DB.
    We are having performance trouble with that DB.
    I would like to know general best-practice steps for troubleshooting Oracle database performance.
    Is there a good general best practice document on the internet for performance troubleshooting?

    @Girish Sharma:  I disagree with this. Many people say things like your phrase "..first identify the root cause and then move forward" but that is not the first step. Any such technique is nothing more than looking at some report, finding a number that you don't like, and attempting to "fix" it. Some people use that supposedly funny term "compulsive tuning disorder" (first used by Gaja Krishna Vaidyanatha) to describe this approach (also advocated in this topic by @Supriyo Dey). The first step must be to determine what the problem is. Until you know that, all those reports you mentioned (which, remember, require EE plus pack licences) are useless.
    @teradata0802, your best practice starts by finding the problem. Is it, for example, that the overnight batch jobs don't finish until lunchtime? A screen takes 10 seconds to refresh, and your target is one second? A report takes half an hour, but you need to run it every five minutes? Determine what business function is causing your client to lose money because it is too slow. Then investigate what it is doing, how, and why. You have to begin by focussing on the problem, not by running database-wide reports.

  • HTML and CSS Best Practices for Eloqua?

    Hello Topliners Community,
    My name is Ben and I am a Web Designer. I am currently looking for any guidance on HTML and CSS best practices when working with Eloqua. I am interested in the best practices for e-mail and landing pages.
    Thank you,
    Ben

    Personally, I like to upload my custom-created HTML/CSS into Eloqua instead of using the WYSIWYG editor.
    But if you must use it, right-clicking on text boxes and clicking Edit Source is the way to go.
    There was a good discussion on editing your forms with CSS:
    Energize Your Eloqua10 Forms with CSS
    created by Ryan Wheler on Nov 2, 2012 8:44 AM, last modified by Greg Stotler on Sep 19, 2013 2:00 PM
    CSS can be used to heavily customize the layout of forms in Eloqua10.  In this article we will provide samples covering some common formatting use cases on Eloqua10 Landing Pages.  Further details about uses of CSS in Eloqua10 form templates can be found here: EE12 - Do It - Eloqua - Energize E10 Forms
    Eloqua10 Forms HTML Structure
    Below is an outline of the structure of the HTML generated by Eloqua when a form is added to a landing page.  By targeting the HTML classes highlighted below, we can control the layout of any form on your landing page.
    For the rest of the page, see: http://topliners.eloqua.com/docs/DOC-3015

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (Target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (Target), what is the best practice? The workaround I have found was to add the same session twice and, for the 1st instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the Source tables. This will impact run times and double the querying against the Source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance you can offer with getting the two products working together would be GREATLY appreciated!
    Robert

    As far as I know, addUser(User user){ ... } is much more useful for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it's very painful to write a method with comma-separated parameters, as the sketch below shows.
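    To illustrate, here is a minimal sketch (the User fields and the service methods are made up for illustration) contrasting the two styles:
    public class User {
        private String firstName;
        private String lastName;
        private String email;

        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public String getLastName() { return lastName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    public class UserService {
        // Comma-separated style: call sites must remember the order and meaning
        // of every argument, and adding a field breaks this signature.
        public void addUser(String firstName, String lastName, String email) {
            // ...
        }

        // Parameter-object style: callers build a User and pass it in;
        // new fields can be added to User without touching this signature.
        public void addUser(User user) {
            System.out.println("Adding user: " + user.getFirstName() + " " + user.getLastName());
        }
    }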

  • Bean load best practice

    I am not new to Java, but up until now I have been a programmer. I am now getting more into design and architecture and have a question about best practice. This question arises from a UML class I was taking, but in the class we stayed within UML and did not get into implementation.
    My Question
    When creating classes and designing how they interact, what is the best practice for implementing associative relationships? For example, if I were modeling a Barn that contained Animals, I would create a Barn bean and an Animal bean. Since the Barn contains Animals, I could write the code like this:
    import java.util.Collection;

    public class Barn {
        private String color;
        private Collection<Animal> animals;

        public void setColor(String newColor) { this.color = newColor; }
        public String getColor() { return color; }
        public void setAnimals(Collection<Animal> newAnimals) { this.animals = newAnimals; }
        public Collection<Animal> getAnimals() { return animals; }
    }

    public class Animal {
        private String name;

        public void setName(String newName) { this.name = newName; }
        public String getName() { return name; }
    }
    The Collection within the Barn bean would be made up of Animal beans.
    This seems fairly straightforward. However, what if I loaded the bean from a database? When building the bean, do I also find all the animals, build the Animal beans, and create the Collection to store within the Barn object?
    Or
    Do I omit the animal Collection from my Barn bean and only populate the Collection at runtime, when someone calls the getAnimals method?
    I am confident that the latter is the better design for performance and synchronization reasons. But I wanted to get other opinions.
    Do I need to read up more on object design?
    Thanks,
    Lonnie

    And use lazy initialization. Basically, unless the data is needed, don't load it.
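    For example, here is a minimal sketch of what a lazily initialized getAnimals() might look like in the Barn bean above (loadAnimalsFromDatabase() is a hypothetical DAO call standing in for whatever persistence code applies):
    public Collection<Animal> getAnimals() {
        // Load the collection only on first access, not when the Barn is built.
        if (animals == null) {
            animals = loadAnimalsFromDatabase();  // hypothetical DAO call
        }
        return animals;
    }
    Note that in a multi-threaded setting this getter would also need synchronization, which ties into the synchronization concern Lonnie raises.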

  • Data Migration and Consolidation Best Practices?

    Hi guys
    Do you know what the best practice for data migration to FCSvr is? We’re trying to consolidate all media on various firewire/internal drives to a centralised RAID directly attached to a dedicated server. We’ve found that dragging and dropping an FCP project file uploads its associated media. The problem is that if there are several versions or separate projects linking to the same media, the associated media is re-uploaded every time! This results in that media getting duplicated several times. It appears that the issue is due to FCSvr creating a subfolder for every project file being uploaded, which contains all the project’s media.
    This behaviour is not consistent when caching assets, checking out a project file, making changes and checking it back in. FCSvr is quite happy for a project file to link to media existing at the root level of the media device.
    We are of course running the latest version of everything. Hope you can help as we’re pulling our hair out here!
    Regards
    Gavin

    Hi,
    Do you really need an ETL tool for these loading processes? Have you considered doing it in SQL/PLSQL? If the performance of the application is one of the main priorities, I would definitely consider doing it in SQL/PLSQL.
    Because of the huge amount of data, and because your source and target systems are Oracle DBs, I wouldn't recommend using Informatica.
    Also, because the source and target are Oracle DBs and the load should be near real time, you should have a look at Oracle Streams.
    Regards
    Maurice

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to the users when tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward, the other data marts' tables will get updated on different schedules, and we do not want to deny access to all the data marts at once. What is the best approach?
    (2) What is the best methodology for ownership of tables that are shared across different data marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so you don't have to revoke access ever. One of the easier tricks to accomplish this is to load data into a shadow table and then rename the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
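    To illustrate the shadow-table swap, here is a minimal JDBC sketch; the table names and connection details are made up, and the fresh load into SALES_SHADOW is assumed to have already completed:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ShadowTableSwap {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:DMART", "etl_user", "password");
                 Statement st = conn.createStatement()) {
                // SALES_SHADOW already holds the freshly loaded data.
                // Each RENAME is a fast dictionary operation, so the window
                // in which SALES does not exist is very small.
                st.execute("ALTER TABLE sales RENAME TO sales_old");
                st.execute("ALTER TABLE sales_shadow RENAME TO sales");
                st.execute("ALTER TABLE sales_old RENAME TO sales_shadow");
            }
        }
    }
    One caveat: object privileges stay with the physical table across a rename, so the new SALES (the former shadow table) needs the same grants as the table it replaces.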
    If you do have to revoke table access, you would generally want to revoke SELECT access to the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means that the table is being refreshed, which is why the zero downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you have individual user logins to the database and use roles to grant whatever ad-hoc privileges users need. I would then create one account per data mart (with perhaps one additional account for the truly generic tables) to own each data mart's objects. Those owner accounts would grant different database privileges to different roles, and you would then grant those roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another data mart without being granted every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Loader Best Practice Question

    The documentation is not very clear on what to do with regard to releasing memory, etc.
    Typically, I'm loading an image and assigning the image to an
    <mx:image>. Do I then need to dispose/unload the loader? If
    someone would be kind enough to either explain this or provide a
    good code example (that implements best practice) I'd appreciate it.
    Thanks.

    There has been some consternation about this. Check this
    thread for instance:
    http://www.adobe.com/cfusion/webforums/forum/messageview.cfm?forumid=60&catid=587&threadid=1179158

  • Configuring AD Sites and Services best practice for multiple office sites?

    Hi People,
    Can anyone here please suggest or share a link to the best practice for configuring AD Sites and Services for a single AD domain with multiple office sites?
    I'd like to know more about the number and direction of the connections between Domain Controllers in one site and the Data Center, and vice versa.
    Thanks.
    /* Server Support Specialist */

    This series can be useful:
    Active Directory Structure Guidelines – Part 1
    Mahdi Tehrani | www.mahditehrani.ir
    This posting is provided AS-IS with no warranties, and confers no rights.

  • Oracle Identity Manager - automated builds and deployment/Best practice

    Is there a best practice for the directory structure of the repository in a version control system?
    Do you recommend keeping the whole xellerate folder plus a separate structure for XML files and Java code? (Considering that multiple upgrades can occur over time.)
    How is custom code merged into the main application?
    How does deployment to the WebLogic application server occur? (Do you create your own script, or is there an out-of-the-box script that can be reused?)
    I would appreciate any guidance regarding this matter.
    Thank you for your help.

    Hi,
    You can use any IDE (Eclipse, Netbeans) for development.
    To get started with the OIM APIs using Eclipse, please follow these steps:
    1. Creating the working folder structure
    2. Adding the jar/configuration files needed
    3. Creating a java project in Eclipse
    4. Writing a sample Java class that will call the APIs
    5. Debugging the code with Eclipse debugger
    6. API Reference
    1. Creating the working folder structure
    The following structure must be created in the home directory of your project (Separate project home for each project):
    <PROJECT_HOME>
    \ bin
    \ config
    \ ext
    \ lib
    \ log
    \ src
    The folders will store:
    src - source code of your project
    bin - compiled code of your project
    config - configuration files for the API and any of your custom configuration files
    ext - external libraries (3rd party)
    lib - OIM API libraries
    log - local logging folder
    2. Adding the jar/configuration files needed
    The easiest way to perform this task is to copy all the files from the corresponding OIM Design Console folders into the <PROJECT_HOME> folders.
    That is:
    <XEL_DESIGN_CONSOLE_HOME>/config -> <PROJECT_HOME>/config
    <XEL_DESIGN_CONSOLE_HOME>/ext -> <PROJECT_HOME>/ext
    <XEL_DESIGN_CONSOLE_HOME>/lib -> <PROJECT_HOME>/lib
    3. Creating a java project in Eclipse
    + Start Eclipse platform
    + Select File->New->Project from the menu on top
    + Select Java Project and click Next
    + Type in a project name (For example OIM_API_TEST)
    + In the Contents panel select "Create project from existing source",
    click Browse and select your <PROJECT_HOME> folder
    + Click Finish to exit the wizard
    At this point the project is created and you should be able to browse
    through it in Package Explorer.
    Setting src in the build path:
    + In Package Explorer right click on project name and select Properties
    + Select Java Build Path in the left and Source tab in the right
    + Click Add Folder and select your src folder
    + Click OK
    4. Writing a sample Java class that will call the APIs
    + In Package Explorer, right click on src and select New->Class.
    + Type the name of the class as FirstAPITest
    + Click Finish
    Put the following sample code in the class:
    import java.util.Hashtable;
    import com.thortech.xl.util.config.ConfigurationClient;
    import Thor.API.tcResultSet;
    import Thor.API.tcUtilityFactory;
    import Thor.API.Operations.tcUserOperationsIntf;

    public class FirstAPITest {
        public static void main(String[] args) {
            try {
                System.out.println("Startup...");
                System.out.println("Getting configuration...");
                ConfigurationClient.ComplexSetting config =
                    ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                System.out.println("Login...");
                Hashtable env = config.getAllSettings();
                tcUtilityFactory ioUtilityFactory =
                    new tcUtilityFactory(env, "xelsysadm", "welcome1");
                System.out.println("Getting utility interfaces...");
                tcUserOperationsIntf moUserUtility = (tcUserOperationsIntf)
                    ioUtilityFactory.getUtility("Thor.API.Operations.tcUserOperationsIntf");
                // Search for users whose first name is "System"
                Hashtable mhSearchCriteria = new Hashtable();
                mhSearchCriteria.put("Users.First Name", "System");
                tcResultSet moResultSet = moUserUtility.findUsers(mhSearchCriteria);
                for (int i = 0; i < moResultSet.getRowCount(); i++) {
                    moResultSet.goToRow(i);
                    System.out.println(moResultSet.getStringValue("Users.Key"));
                }
                System.out.println("Done");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Replace the "welcome1" with your own password.
    + Save the class.
    To run the example class, perform the following steps:
    + In the top menu, open the "Create, Manage, and Run Configurations" wizard via Run. (In the menu, this can be either "Run..." or "Open Run Dialog...", depending on the version of Eclipse used.)
    + Right click on Java Application and select New
    + Click on arguments tab
    + Paste the following in VM arguments box:
    -Djava.security.manager -DXL.HomeDir=.
    -Djava.security.policy=config\xl.policy
    -Djava.security.auth.login.config=config\authwl.conf
    -DXL.ClientClassName=%CLIENT_CLASS%
    (please change the URL in ./config/xlconfig.xml to point to your application server if it is not running on localhost or is not using the default port)
    + Click Apply
    + Click Run
    At this point your class is executed. If everything is correct, you will see the following output in the Eclipse console:
    Startup...
    Getting configuration...
    Login...
    log4j:WARN No appenders could be found for logger (com.opensymphony.oscache.base.Config).
    log4j:WARN Please initialize the log4j system properly.
    Getting utility interfaces...
    1
    Done
    Regards,
    Sunny Ajmera

  • Help Please!!  SAP Best Practices on creating Projects in NDS...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications. We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects. For example, what are Tracks, Software Components, Development Components, etc., and when do you define them? All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see). We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance. This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi,
    The following URLs give you info about how to install JDI, what DCs and SCs are, how to define a Track, how to manage versioning, etc.
    You need to install the JDI separately. Go to http://service.sap.com/patches -> Entry by Application Group -> SAP Net Weaver -> SAP NETWEAVER -> SAP NETWEAVER 04 -> NWDI -> JDI 6.40 -> #OS independent
    https://www.sdn.sap.com/sdn/downloaditem.sdn?res=/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/sap enterprise portal 6.0 sp4 netweaver stack 4 developer sneak preview download.abst
    https://media.sdn.sap.com/public/eclasses/nwbcil/Java_Development_Infrastructures_files/Default.htm#nopreload=1
    http://help.sap.com/saphelp_nw04/helpdata/en/01/9c4940d1ba6913e10000000a1550b0/content.htm.
    http://127.0.0.1:4180/help/index.jsp?topic=/com.sap.devmanual.doc.user/16/6c40450311774a8bb16f73e450f634/frameset.htm
    http://help.sap.com/saphelp_erp2004/helpdata/en/38/33eb9c3e1fe2409a9eba8246b933ca/content.htm
    http://help.sap.com/saphelp_erp2004/helpdata/en/78/52f0d3b3c8d1409d99a2148b2be596/frameset.htm
    http://help.sap.com/saphelp_erp2004/helpdata/en/78/52f0d3b3c8d1409d99a2148b2be596/frameset.htm   (deleting workspace folders)
    http://help.sap.com/saphelp_erp2004/helpdata/en/26/4b0d0930034b73aa3332ddc8c75e9d/content.htm
    http://www.sap-press.de/download/dateien/817/sappress_java_programming.pdf (Chapter on JDI)
    Hope this info helped you!
    Regards,
    RK

  • Flat File load best practice

    Hi,
    I'm looking for a Flat File best practice for data loading.
    The need is to load flat file data into BI 7. The flat file structure has been standardized, but contains 4 slightly different flavors of data. Thus, some fields may be empty while others are mandatory. The idea is to have separate cubes at the end of the data flow.
    Onto the loading of said file:
    Is it best to load all data flavors into 1 PSA and then separate into 4 specific DSOs based on data type?
    Or should the data be separated into separate file loads as early as the PSA? That is, have 4 DataSources/PSAs and separate flows from there on up to the cubes?
    I guess pros/cons may come down to where the maintenance falls: separate files vs separate PSA/DSOs...??
    Appreciate any suggestions/advice.
    Thanks,
    Gregg

    I'm not sure if there is a best practice for this scenario (or maybe there is one), as this is data related to a specific customer's needs. But if I were you, I would handle one file into the PSA and route the data to its respective ODS from there. That would give me more flexibility within BI to manipulate the data as needed, without having to involve the business for 4 different files (chances are they will get them wrong when splitting the files). In case of any issue, your troubleshooting would then start from the PSA rather than going through the file (very painful and frustrating) to see which records in the file screwed up the report. I'm more comfortable handling BI objects than data files, because you know exactly where you have to look.

  • Oil and gas best practices - need information

    Colleagues!
    Help me find some information about SAP oil and gas best practices.
    I am interested in .ppt presentations and Word documents that in some way describe best practices for the oil and gas industry.
    Thanks in advance for your help.

    Hi,
    Can you please check this link http://www.sap.com/industries/oil-gas/index.epx.
    Hope this helps you.
    Rgds
    Manish

  • HTTP Web Response and Request Best Practices to Avoid Common Errors

    I am currently investigating an issue with our web authentication server not working for a small subset of users. My investigation led me to try to figure out the best practices for web responses.
    The code below allows me to get an HTTP status code for a website.
    using System.Net;  // needed for HttpWebRequest, CookieContainer, WebRequest
    // ...
    string statusCode;
    CookieContainer cookieContainer = new CookieContainer();
    HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(url);
    // Allow auto-redirect.
    myHttpWebRequest.AllowAutoRedirect = true;
    // Make sure a cookie container is set up; without one, cookies are not
    // saved between redirects and the request can get caught in a redirect loop.
    myHttpWebRequest.CookieContainer = cookieContainer;
    // Dispose of the response so the underlying connection is released.
    using (HttpWebResponse webResponse = (HttpWebResponse)myHttpWebRequest.GetResponse())
    {
        statusCode = "Status Code: " + (int)webResponse.StatusCode + ", " + webResponse.StatusDescription;
    }
    Throughout my investigation, I encountered some error status codes, for example the "Too many automatic redirections were attempted" error. I fixed this by adding a cookie container, as you can see above.
    My question is: what are the best practices for making and handling HTTP requests?
    I know my hypothesis that I was missing crucial web request methods (such as setting a cookie container) is correct, because that fixed the redirect error.
    I suspect my customers are having the same issue as a result of using our software to authenticate with our server/URL. I would like to avoid as many web request issues as possible.
    Thank you.

    Hello faroskalin,
    This issue is more regarding ASP.NET, so I suggest asking it at
    http://forums.asp.net/
    There are ASP.NET experts who will help you better.
    Regards.

  • How to load best practices data into CRM4.0 installation

    Hi,
      We have successfully installed CRM 4.0 on a lab system and would now like to install the CRM best practice data into it.
      If I refer to the CRM BP help site http://help.sap.com/bp_crmv340/CRM_DE/index.htm,
    it looks like I need to install at least the following in order to run it properly:
    C73: CRM Essential Information 
    B01: CRM Generation 
    C71: CRM Connectivity 
    B09: CRM Replication 
    C10: CRM Master Data 
    B08: CRM Cross-Topic Functions
    I am not sure where to start and where to end. At the minimum level I need CRM Sales to start with.
    Is there just one installation CD or a number of them? Also, are they available in the download area of service.sap.com?
    Appreciate the response.

    Of course you need to install the Best Practices configuration, or do your own config.
    Simply installing CRM 4.0 from the distribution CD/DVD will get you a plain vanilla CRM system with no configuration and obviously no data.  The Best Practices guide you through the process of configuring CRM, and some tasks have even been automated.  If you use some of the CATT processes of the Best Practices, you can even populate data in your new system (BP data, or replace the input files with your own data).
    In 12 years of SAP consulting, I have NEVER come across a situation whereby you simply install SAP from the distribution media and can start using it without ANY configuration.
    My advice is to work through the base configuration modules first, either by importing the BP config/data or following the manual instructions to create the config/data yourself.  Next, look at what your usage of CRM is going to be, for example Internet Sales, Service Management, et cetera, and then install the config for those modules.
