Database/data mart size and filegrowth

We have an OLTP database with an initial size of 1GB and a log size of 500MB with filegrowth of 16MB for both files.
Also, the data and log files are stored on different disks for dedicated read/write I/O.
Is there a best practice for log size (e.g. half or a quarter of the DB size)?
We are also creating a data mart.
Should the data mart be on a different disk also?
Is there a best practice for the data and log file sizes of a data mart?

Is there a best practice for log size (e.g. half or a quarter of the DB size)?
I can't speak to the data mart; I am just talking about the log file size for the OLTP database. There is no specific recommendation; it should be sized according to the auto growth actually occurring on the log file.
16MB auto growth seems low to me for both the data and log file. I don't know what the exact value would be, but I would point you to an article that will help you set the correct value. It's long, but worth reading.
That article has a couple of queries which will tell you the correct value.
Please don't create multiple log files, as there would be no performance gain.
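Not the queries from that article, but as a rough starting point (a minimal sketch, assuming SQL Server 2005 or later; the database and file names in the ALTER statements are placeholders) you can check the current size and growth setting of each file and then switch to a larger fixed increment:

-- Current size and autogrowth setting of each file in the current database
-- (size and growth are reported in 8 KB pages, so divide by 128 to get MB)
SELECT name,
       type_desc,
       size / 128 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Example of moving both files to a larger fixed growth increment (256 MB here)
ALTER DATABASE [YourDb] MODIFY FILE (NAME = 'YourDb_data', FILEGROWTH = 256MB);
ALTER DATABASE [YourDb] MODIFY FILE (NAME = 'YourDb_log',  FILEGROWTH = 256MB);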
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP

Similar Messages

  • Oracle 10g - Display database data, edit it and write back to database

    Hi!
    I would like to find an application which allows me to read data from a database (object-relational database schema), display it visually, make it editable and offer me the opportunity to write changed data to the database.
    Does anyone have experience with Visio or some other tool?
    Timo

    Hi,
    thank you for your replies!
    I am not sure if the mentioned applications really are what I am looking for. It seems to me that they are developing tools rather than presentation and editing tools. I don't want to present or change the schema but the data of the database. The objects in my database are business processes and the user should be able to get all the information he/she desires about them, be able to edit them and also to create new processes. It's more like a GUI for end users.
    Timo

  • Data Mart Interface and data to BW

    Hi There,
    I am an SAP consultant with an SD functional background. I would like to know how sales data or supply chain data is pulled into BW using the data mart interface.
    Please provide any tcodes for the above, as well as tcodes for generating reports specific to sales.
    Thanks!

    Hi,
    Could you check the following URL?
      http://help.sap.com/saphelp_nw70/helpdata/EN/09/a7b339688d2453e10000000a114084/frameset.htm
    Regards,
    AleX

  • Get back the Data mart status in ODS and activate the delta update.

    I got a problem when deleting the requests in ODS.
    Actually there is a Cube (1st level); it gets loaded from an ODS (2nd level), which in turn gets loaded from 3 ODSs (3rd level). We wanted to delete recent requests from all the data targets and reload from PSA, but while deleting the request in the ODS (2nd level), it displayed a window showing the following:
    - The request 132185 has already been retrieved by the data target BP4CLT612.
    - The delta update in BP4CLT612 must be deactivated before deleting the request.
    - Do you want to deactivate the delta update in data target BP4CLT612?
    I clicked on 'execute changes' in the window, and it removed the data mart status for all the requests which I had not deleted.
    The same thing happened in the 3 ODSs (3rd level).
    It is clear that if we load further data from the source system, it will load all the records from the beginning.
    So to avoid this, can anybody help me with how to reset the data mart status and activate the delta update?

    Hi Satish,
    You have to make the requests RED in the cube and back them out of the cube before you can go for request deletions from the base targets (from which the cube gets its data).
    Then you have to reset the data mart status for the requests in your 'L2 ODS' before you can delete requests from the ODS.
    Here I think you tried to delete without resetting the data mart status, which has upset the delta sequence.
    To correct this:
    To the L2 ODS, do an init without data transfer from the 3 ODSs below, after removing the init request from the scheduler menu in the init InfoPackage.
    Do the same from the L2 ODS to the Cube.
    Then reconstruct the deleted request in the ODS. It will not show the tick mark in the ODS. Do a delta load from the ODS to the Cube.
    See the thread below:
    Urgentt !!! Help on reloading the data from the ODS to the Cube.
    cheers,
    Vishvesh

  • Missing Image Size and Position Data

    In AI CS5 the size and position of an image were displayed in the top toolbar when you selected the image. This is missing in AI CS6; how can I get it back?

    Well, I did discover that what I need is called "Transform", but it's already checked in the menu you suggested. In fact EVERY box is already checked in that menu, and checking or unchecking them from that menu has absolutely no effect on anything anyway.
    I found that I can expand one of the menus from the right side, and the "Transform" box is there, and that's fine. But that data (object size and position) was very useful when displayed on the top toolbar in CS5. Any idea how to make it appear in the top toolbar? It will un-dock (and is very difficult to re-dock).

  • How do I find all database log file sizes and mdf file sizes?

    hi experts,
    Could you share a query to find every database's log file size, mdf file size (including ndf files) and total db size, in MB and GB?
    I have a task to take the db sizes for around 300 dbs.
    DB_Name | Log_file_size | mdf_file_size | Total_db_size | MB | GB
    Thanks,
    Vijay

    Use this, Vijay:
    set nocount on

    Declare @Counter int
    Declare @Sql nvarchar(1000)
    Declare @DB varchar(100)
    Declare @Status varchar(25)
    Declare @CaptureDate datetime

    Set @Status = ''
    Set @Counter = 1
    Set @CaptureDate = getdate()

    -- One row per database file, with size and space-used figures
    Create Table #Size
    (
        SizeId int identity,
        Name varchar(100),
        Size int,
        FileName varchar(1000),
        FileSizeMB numeric(14,4),
        UsedSpaceMB numeric(14,4),
        UnusedSpaceMB numeric(14,4)
    )

    -- List of databases to loop over
    Create Table #DB
    (
        Dbid int identity,
        Name varchar(100)
    )

    -- Scratch table for the DATABASEPROPERTYEX status check
    Create Table #Status
    (
        Status sql_variant
    )

    Insert Into #DB
    Select Name
    From sys.databases

    While @Counter <= (Select Max(Dbid) From #DB)
    Begin
        Set @DB = (Select Name From #DB Where Dbid = @Counter)

        -- Only query databases that are ONLINE (skip restoring/offline/suspect ones)
        Set @Sql = 'SELECT DATABASEPROPERTYEX(''' + @DB + ''', ''Status'')'
        Insert Into #Status
        Exec (@Sql)
        Set @Status = (Select convert(varchar(25), Status) From #Status)

        If @Status = 'ONLINE'
        Begin
            -- size and SpaceUsed are in 8 KB pages, so divide by 128 to get MB
            Set @Sql =
            'Use [' + @DB + ']
            Insert Into #Size (Name, Size, FileName, FileSizeMB, UsedSpaceMB, UnusedSpaceMB)
            Select ''' + @DB + ''', size, filename,
                   convert(numeric(10,2), round(size/128., 2)),
                   convert(numeric(10,2), round(fileproperty(name, ''SpaceUsed'')/128., 2)),
                   convert(numeric(10,2), round((size - fileproperty(name, ''SpaceUsed''))/128., 2))
            From sysfiles'
            Exec (@Sql)
        End
        Else
        Begin
            -- Record the status so the database still shows up in the output
            Set @Sql =
            'Insert Into #Size (Name, FileName)
            Select ''' + @DB + ''', ''' + @Status + ''''
            Exec (@Sql)
        End

        Delete From #Status
        Set @Counter = @Counter + 1
    End

    Select Name, Size, FileName, FileSizeMB, UsedSpaceMB, UnusedSpaceMB,
           right(rtrim(FileName), 3) as type, @CaptureDate as CaptureDate
    From #Size

    drop table #DB
    drop table #Status
    drop table #Size

    set nocount off
    Andre Porter
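    For what it's worth, here is a shorter sketch (not from the thread; it assumes SQL Server 2005 or later) that summarises allocated data, log and total sizes per database from sys.master_files, without looping:
    -- One row per database: data (mdf/ndf) size, log size and total, in MB and GB
    -- (size is stored in 8 KB pages, so divide by 128.0 for MB)
    SELECT DB_NAME(database_id)                                    AS DB_Name,
           SUM(CASE WHEN type_desc = 'LOG'  THEN size END) / 128.0 AS Log_file_size_MB,
           SUM(CASE WHEN type_desc = 'ROWS' THEN size END) / 128.0 AS mdf_ndf_file_size_MB,
           SUM(size) / 128.0                                       AS Total_db_size_MB,
           SUM(size) / 128.0 / 1024.0                              AS Total_db_size_GB
    FROM sys.master_files
    GROUP BY database_id
    ORDER BY DB_Name;
    Note that this reports allocated file sizes only; space used inside each file still needs fileproperty(name, 'SpaceUsed') per database, as in the script above.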

  • Oracle BPM Process Data mart

    I am required to create audit reports on BPM workflows.
    I am new to this and need some guidance on configuring the BPM Process Data Mart. What are the pre-requisites for configuring it, and what are the steps to do it?
    Also, I need some inputs on the BAM database. What is the frequency of data upload? Is it a data update or an insert in BAM?

    Hi,
    You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
    I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added the steps below, which provide more detail. I wrote this for ALBPM 6.0, but believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
    You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
    Here's the contents of the document:
    1. Introduction
    The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today’s BPM project implementations for the obvious benefits they bring to the management and tuning of processes.
    BAM is basically composed of a collection of measurements of current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
    Data Mart, on the other hand, is a historical view of the process load and execution times. This gives us an idea of how the business has developed since the moment the projects were put in place.
    In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater, but rather we will quickly move from one configuration step to another paying more attention to subjects that have presented some difficulties in real-life projects.
    2. Creating the Service Endpoints
    The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
    In this following example the service endpoint ‘BAMJ2EEWL’ is being defined. This definition is going to be used later as BAM storage. At this point nothing is created.
    Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express, so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
    ·     the password for the user
    Scroll down to the bottom of the page and click <Save>.
    In addition to a standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added as the Engine runs in a WebLogic J2EE container. This Data Source is needed to grant the Engine access over BAM tables thru the J2EE Connection Pool instead of thru a dedicated JDBC. The following is an example of how to set this up.
    Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
    On the following screen, specify:
    ·     the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
    Then click <Save>.
    In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
    Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express, so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
    ·     the password for the user
    3. Configuring BAM Updater Service
    Once the service endpoint has been created the next step is to enable the BAM update, select the service endpoint to be used as BAM storage and configure update frequency and others. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the Remote JDBC as we are using the J2EE Engine.
    So, here’s the example of how to set up the BAM Updater service….
    Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
    Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
    Once the data is input, click <Save>.
    We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
    Text showing the successful creation of the database and data structures should appear.
    Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
    4. Configuring Process Data Mart Updater Service
    In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
    In the case it is going to be used, the detail level, snapshot time and the time of update should be configured; in addition to enabling the updater and choosing the storage configuration. An example is shown below:
    Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
    Also, un-tick the Generate O3 Cubes (see later notes):
    Then click <Save>.
    Once those properties have been configured, the database and the data structure have to be created. This is performed at the “Manage Database” page, for which the link has appeared at the bottom of the page (as with BAM). Even though this page is identical to the one shown above (for the BAM configuration), it has been opened from the link in the “Process Data Mart” page, and this makes it different.
    Text showing the successful creation of the database and data structures should appear.
    5. Configuring Common Updater Service Parameters
    In the “Process Data Mart” tab of the Process Monitoring section -along with the parameters that are specific to the Data Mart - we will find some parameters that are common to all services. These parameters are:
    • Log directory: location of the log file
    • Messages logged from Data Store Updater: severity level of the Updater logs
    • Language
    • Generate Performance Metrics: enables performance metrics generation
    • Generate Workload Metrics: enables workload metrics generation
    • Generate O3 Cubes: enables O3 Cubes generation
    In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
    a. the Log directory must be specified in order for the logs to be generated
    b. the Messages logged from Data Store Updater, which indicates the level
    of the logs, should be DEBUG for troubleshooting and WARNING otherwise
    c. Performance and Workload Metrics need to be on for the typical BAM usage and, even when either metric might not be used on the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
    d. the Generation of O3 Cubes must be off if this service is not used; otherwise the Data Mart Updater service might not work properly.
    The only change required on this screen was to de-select ‘Generate O3 Cubes’, as shown in the last section.
    6. Set up the WebLogic configuration
    We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
    Click on <Lock and Edit> and then <New> to add a New data source.
    Specify:
    ·     the Name – use the name you set up in the Process Administrator
    ·     the JNDI Name – again use the name you set up in the Process Administrator
    ·     the Database Type – Oracle
    ·     use the default Oracle Database Driver
    Then click <Next>
    On the next screen, click <Next>
    On the next screen specify:
    ·     the Database Name – this is the SID – for me that is XE
    ·     the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
    ·     the Database User Name and Password – this is the BAM database user specified in the Process Administrator
    Then click <Next>
    On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
    On the next screen, select your server as the target server and click <Finish>:
    Finally, click <Activate Changes>.
    7. The Last Step: Starting Up and Shutting Down the Updater Service
    ALBPM distribution is different depending on the Operating System. In the case of the Updater Service:
    -     For Unix like Operating Systems the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
    $ALBPM_HOME/bin$ ./albpmwarehouse.sh start
    -     For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
    %ALBPM_HOME%\bin> albpmwarehouse.bat install
    After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
    C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
    8. Finally: Running BAM dashboards to show it is Working
    Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
    9. General BAM and Data Mart Caveats
    a. The basic difference between these two collections of measurements is that BAM keeps track of current process load and execution times while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day) and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements will be a daily sequence of Engine Events snapshots.
    b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
    c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
    d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
    e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
    f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
    The caveat here is that there is a maximum number of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommendable to reuse business variables. This is achieved by mapping similar Business Variables of different projects with a unique real Variable (on the variable mapping performed at publish time).
    g. Configuring the Updater Service Log
    In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
    - dwupdater.log: which is the log for the Data Mart updater service
    - bam-dwupdater.log: which is the log for the BAM updater service
    In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
    - wrapper.console.loglevel: level for the updater service log
    - wrapper.logfile.loglevel: level for the updater service log
    - wrapper.java.additional.n: additional argument to the service JVM
    - wrapper.logfile.maxsize: maximum size of the updater service log files
    - wrapper.logfile.maxfiles: maximum number of updater service log files
    - wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
    9.1. Updater Service Log Configuration Caveats
    a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (since the default is WARNING). The loglevel parameters have to be set to DEBUG, and a wrapper.java.additional.n (where n is the next integer after the ones already in use) has to be set to -ea to enable asserts, since without this option no DEBUG message is going to be generated.
    b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value up to two days are stored).
    c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
    Hope this helps,
    Dan

  • Data marting

    Hi experts,
    Can anyone give me an idea of the data mart interface and send documents related to DMI to my mail id [email protected]?
    Thanks in advance,
    kumar

    Hi Kumar,
    I hope this help.sap link will be useful to you:
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6110e07211d2acb80000e829fbfe/content.htm
    Regards,
    R.Ravi

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we work with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error.
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, then it is worth investigating whether this is simply due to standard growth or a recent change that has had an unexpectedly significant impact.

  • Listing File, path, sizes and dates

    Hi all
    Case: I have 6 machines on one network (some with 2 drives), another on another network (connected by vpn) and 3 external drives.
    These all have accumulated stuff, some repeated as I've upgraded and re-purposed machines but left directories behind as a safeguard.
    I need to rationalise the space and develop a more systematic approach to backup and archiving.
    My first step is to do an inventory of what's where, and my natural approach is to work with a database of file name, path, size, created date and last modified, to allow me to do some maths on archiving to DVD.
    Using find as follows
    find [Start Somewhere Directory] -print > [workspace path]/TestOutput.txt
    gives me a nice list of file names I can parse out in a database.
    But I don't get size and dates.
    Adding -ls produces that info but creates header lines for parent directories and adds permissions, node and other info I don't need.
    I got to here
    find [Start Somewhere Directory] -type df -exec ls -lshk {} \; -print > [workspace path]/TestOutput.txt
    but still has the hierarchical output rather than flat paths.
    Am I doing this all the hard way ? Is there a tool that returns just what I'm looking for ?
    Or what command will allow me to take just the relevant columns from ls to the print parameter?
    Or can I extend find to add the size and date info to the output ?
    Kind Regards
    Eric

    Eric
    I just so happened to have done something similar before!
    It relies on mdls so isn't exactly speedy, but produces a full path, size, modification date, modification time, creation date and creation time as a comma separated list. mdls is not exactly predictable as to which order you get its output, so basically you have to try first without any editing.
    Anyway, here it is:
    sudo find ~/testfolder -type f \! -name ".*" -exec echo 'mdls -name kMDItemFSSize -name kMDItemFSCreationDate -name kMDItemFSContentChangeDate "{}" | tr "\n" "," | sed "s%^\(/.*\) .*ChangeDate = \(....-..-..\) \(.*\) .CreationDate.= \(....-..-..\) \(.*\) ...00,.FSSize.= \(.*\),$%\"\1\",\6,\2,\3,\4,\5%"' \; | sh
    I'm sure it could be improved!
    You could also do it with AppleScript, since that can access the creation date easily.

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to the users when tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward with other data marts the tables will be updated on different schedules, and we do not want to deny access to all the data marts. What is the best approach?
    (2) What is the best Methodology for table ownership of tables in different data marts that share tables across marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so you don't have to revoke access ever. One of the easier tricks to accomplish this is to load data into a shadow table and then rename the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
    If you do have to revoke table access, you would generally want to revoke SELECT access to the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means that the table is being refreshed, which is why the zero downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you have individual user logins to the database and use roles to grant whatever ad-hoc privileges users need. I would then create one account per data mart, with perhaps one additional account for the truly generic tables, that own each data mart's objects. Those users would then grant different roles different database privileges, and you would then grant those different roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another data mart without granting her every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
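    Not from Justin's reply, but a rough sketch of the shadow-table swap and the role-based revoke he describes could look like this (all table, role and user names are made up for illustration):
    -- Shadow-table approach: load into a copy, then swap names so readers
    -- only ever see a fully loaded table
    CREATE TABLE sales_fact_shadow AS SELECT * FROM sales_fact WHERE 1 = 0;
    -- ... run the ETL load into sales_fact_shadow here ...
    ALTER TABLE sales_fact RENAME TO sales_fact_old;
    ALTER TABLE sales_fact_shadow RENAME TO sales_fact;
    -- (grants stay with the renamed object, so re-grant on the new sales_fact after the swap)
    -- Role-based access: revoke SELECT from a per-mart role while refreshing,
    -- then grant it back, instead of touching individual accounts
    CREATE ROLE sales_mart_read;
    GRANT SELECT ON sales_fact TO sales_mart_read;
    GRANT sales_mart_read TO bi_user;
    REVOKE SELECT ON sales_fact FROM sales_mart_read;  -- before the refresh
    GRANT SELECT ON sales_fact TO sales_mart_read;     -- after the refresh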

  • Get Total DB size, Total DB free space, Total Data & Log File Sizes and Total Data & Log File Free Sizes from a list of servers

    How do I get SQL Server total DB size, total DB free space, total data & log file sizes and total data & log file free sizes from a list of servers?

    Hi Shivanq,
    To get a list of databases, their sizes and the space available in each on the local SQL instance:
    dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto
    This article is also helpful for you to get DB and Log File size information:
    Checking Database Space With PowerShell
    I hope this helps.
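    If T-SQL is easier to run across your list of servers (for example through a registered servers / multi-server query), here are a couple of sketches that also cover the free-space side (assuming SQL Server 2005 or later):
    -- Log size and log space used (%) for every database on the instance
    DBCC SQLPERF(LOGSPACE);
    -- Allocated vs. used space for the files of the current database
    SELECT name,
           size / 128.0                                     AS Allocated_MB,
           FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS Used_MB,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS Free_MB
    FROM sys.database_files;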

  • Database size and Storage Metrics are mismatching

    Hi,
    When I look at the storage metrics for the root site collection of one of our SharePoint Web Applications, it shows the top sized site as 28.2% of the parent and its size as 21178.5 MB. When we calculate the total size of the site collection from that, it should be about 74994 MB. But when I see the size of the content database (this is the only site collection in this content database), it shows 185661.5 MB as the size and 26852.5 MB as the space available. When I run the report (Disk Usage By Top Tables)
    on that database, it brings up the list with the dbo.AllDocStreams table at the top (Data - 80333 MB), followed by dbo.AuditData (Data - 48305 MB), followed by dbo.AllUserData (Data - 8593 MB).
    3 years back we migrated this SharePoint Web Application from 2007 to 2010.
    I saw that the recycle bin has only 30 days' worth of data.
    How can we reduce the size of the database according to what it shows in the storage metrics? Appreciate any help.
    Thanks.
    rani

    so... your database has:
    80gb in files (includes version history)
    50gb in audit data
    8.5gb of list data
    plus ~27gb of free space for growth between backups
    total of ~165gb (which is close enough to the 185661.5 that you listed)
    you need to split up the site collection, and then spread the site collections across multiple databases.
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs
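    If you want to double-check the "Disk Usage by Top Tables" figures yourself, a read-only sketch against sys.dm_db_partition_stats in the content database (standard SQL Server views, nothing SharePoint-specific) gives the reserved space per table:
    -- Reserved space per table in the current database, largest first
    -- (8 KB pages, so divide by 128.0 for MB)
    SELECT OBJECT_SCHEMA_NAME(p.object_id) + '.' + OBJECT_NAME(p.object_id) AS table_name,
           SUM(p.reserved_page_count) / 128.0                               AS reserved_mb,
           SUM(CASE WHEN p.index_id IN (0, 1) THEN p.row_count ELSE 0 END)  AS row_cnt
    FROM sys.dm_db_partition_stats AS p
    GROUP BY p.object_id
    ORDER BY reserved_mb DESC;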

  • Database file sometimes gets to zero size and zero file flags

    I have a problem where, on my system, a single database file sometimes gets into a state where it has zero size and zero file flags (as if someone ran 'chmod 0 file' on it).
    My database runs 24/7 and there are multiple agents running at the same time. My database files are backed up and removed from time to time to protect stored data. So I guess that this error could come up when two agents back up or recover the database file at the same time. Though it is hard to tell whether it is caused by this, so I'd like to ask whether there is anyone who has stumbled on the same problem - where one database file ends up in a state of 0 flags and 0 size.

    Hello, does anyone have any tips about this issue?

  • Swapping and Database Buffer Cache size

    I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for SQL data would not seem to be a problem, unless it is the proportion of the database buffer cache to the rest of the SGA that matters.

    Well, I am always a defender of a large DB buffer cache. Setting a bigger db buffer cache alone will not in any way hurt Oracle performance.
    However, as the buffer cache grows, the time to determine which blocks need to be cleaned increases. Therefore, at a certain point the benefit of a larger cache is offset by the time to keep it sync'd to the disk. After that point, increasing the buffer cache size can actually hurt performance. That's the reason why Oracle has checkpoints.
    A checkpoint performs the following three operations:
    1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
    It's the DBWR process that writes all modified database blocks back to the datafiles.
    2. The latest SCN is written (updated) into the datafile header.
    3. The latest SCN is also written to the controlfiles.
    The following events trigger a checkpoint.
    1. Redo log switch
    2. LOG_CHECKPOINT_TIMEOUT has expired
    3. LOG_CHECKPOINT_INTERVAL has been reached
    4. The DBA requests one explicitly (alter system checkpoint)
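    As a small illustration (not from the reply above; it assumes you have the necessary privileges), you can look at the relevant cache parameters and request a checkpoint on demand:
    -- Check the buffer cache setting (0 means it is auto-managed via sga_target / memory_target)
    SELECT name, value FROM v$parameter WHERE name IN ('db_cache_size', 'sga_target');
    -- Ask for a checkpoint now: DBWR writes the dirty buffers to the datafiles
    -- and the datafile headers / controlfiles get the latest SCN
    ALTER SYSTEM CHECKPOINT;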
