10gR2 archivelog mode: logfile switch hung

Hi there,
I just enabled archivelog mode on the database and tried to do a log switch. It has been hanging for over 30 minutes and is still hung.
Can anyone give me some input please?
Thanks
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 17
Next log sequence to archive 17
Current log sequence 19
SQL> alter system switch logfile;

I found the problem. There was no error in the alert log file; the error was only generated after I pressed Ctrl-C. The problem was the NFS mount.
Thanks for your help.
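For anyone hitting a similar hang: while the switch is stuck, checks along these lines (a minimal diagnostic sketch, not specific to this case) show what the foreground session is waiting on and whether an archive destination is in error, which is how an unresponsive NFS mount typically shows up:
SQL> select sid, event, seconds_in_wait
  2    from v$session_wait
  3   where event like 'log file switch%';
SQL> select dest_id, status, error
  2    from v$archive_dest
  3   where status <> 'INACTIVE';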

Similar Messages

  • -logfile switch

    Good day all,
I have been looking at the -logfile switch when invoking dgmgrl, to log the output of some commands.
I want to log things like SHOW CONFIGURATION and SHOW DATABASE so the output can be mailed to our help desk, allowing them to monitor that the standby db is working.
Every time I invoke dgmgrl -logfile <File Name> I can't find the file it should be creating.
I have read the documentation; it does not say that the logfile is put in a different location, but it does imply that you need an observer, and that the -logfile switch (the command-line switch, not a redo log switch) can only be used when starting an observer.
Is this true?
Is the only way to log these commands to use an external command like script?
    Thanks
    Steve

Thanks Mseberg.
From what I can see, the observer is used to monitor and effect automatic failover when in Max Availability mode.
We are running in Max Performance and fail over by hand when the decision is taken.
It looks like the -logfile switch can only be used with the observer: using the switch without an observer produces no log file. That is a shame, as it would be very useful to log SHOW CONFIGURATION and SHOW DATABASE from a job and e-mail the output, rather than having to log onto each DG configuration to ensure it is still functioning.
Is my understanding correct?
If so, we won't be able to do what I was hoping, other than using a linux/unix script command, as we won't be using the observer.
Or can the observer be used with the fast_start_failover stuff switched off, just so we can use it to monitor the configuration?
I hope I am making sense.
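For what it's worth, dgmgrl also accepts a command as a command-line argument, so the output can be captured with plain shell redirection and no observer at all. A minimal sketch (the connect string, file path, and mail address are placeholders for your site):
$ dgmgrl -silent sys/password@primary "show configuration" > /tmp/dg_status.log
$ mailx -s "Data Guard status" helpdesk@example.com < /tmp/dg_status.log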

  • Logfile switch is not causing checkpoint.

    Hi,
Database: Oracle 10g running on Solaris
Doesn't a log switch cause a checkpoint?
    SQL> select NAME,CHECKPOINT_CHANGE#,STATUS from v$datafile;
    NAME CHECKPOINT_CHANGE# STATUS
    /u02/oradata/bihyd2/system.dbf 1831405 SYSTEM
    /u02/oradata/bihyd2/undotbs1.dbf 1831405 ONLINE
    /u02/oradata/bihyd2/sysaux.dbf 1831405 ONLINE
    /u02/oradata/bihyd2/sys01.dbf 1831405 SYSTEM
    /u02/oradata/bihyd2/t1.dbf 1831405 ONLINE
    /u02/oradata/bihyd1/users01.dbf 1831405 ONLINE
    SQL> drop table t1;
    Table dropped.
    SQL> select CURRENT_SCN from v$database;
    CURRENT_SCN
    1865085
    SQL> alter system switch logfile;
    System altered.
    SQL> select NAME,CHECKPOINT_CHANGE#,STATUS from v$datafile;
    NAME CHECKPOINT_CHANGE# STATUS
    /u02/oradata/bihyd2/system.dbf 1831405 SYSTEM
    /u02/oradata/bihyd2/undotbs1.dbf 1831405 ONLINE
    /u02/oradata/bihyd2/sysaux.dbf 1831405 ONLINE
    /u02/oradata/bihyd2/sys01.dbf 1831405 SYSTEM
    /u02/oradata/bihyd2/t1.dbf 1831405 ONLINE
    /u02/oradata/bihyd1/users01.dbf 1831405 ONLINE

A log switch implies a checkpoint, but a checkpoint does not imply a log switch. When you make a log switch, Oracle initiates a checkpoint and moves to another log file for writing. When a log switch occurs, all contents of the redo buffer must be written to the log file before it can be reused; LGWR has normally already written them once the buffer is 1/3 full or after 3 seconds, but the switch still makes sure everything in the buffer reaches disk. The checkpoint itself is a slow background process.
    SQL> select name,checkpoint_change#,status
      2    from v$datafile
      3  /
    NAME                              CHECKPOINT_CHANGE# STATUS
    /u01/app/oracle/oradata/mmi/system01.dbf                     915491 SYSTEM
    /u01/app/oracle/oradata/mmi/sysaux01.dbf                    915491 ONLINE
    /u01/app/oracle/oradata/mmi/undotbs01.dbf                   915491 ONLINE
    /u01/app/oracle/oradata/mmi/users01.dbf                     917269 OFFLINE
    SQL> alter system switch logfile
      2  /
    System altered.
    SQL> select name,checkpoint_change#,status
      2    from v$datafile
      3  /
    NAME                              CHECKPOINT_CHANGE# STATUS
    /u01/app/oracle/oradata/mmi/system01.dbf                     915491 SYSTEM
    /u01/app/oracle/oradata/mmi/sysaux01.dbf                    915491 ONLINE
    /u01/app/oracle/oradata/mmi/undotbs01.dbf                   915491 ONLINE
/u01/app/oracle/oradata/mmi/users01.dbf                     917269 OFFLINE
The same is not true of a manual checkpoint: ALTER SYSTEM CHECKPOINT just flushes the dirty buffers from the data buffer cache to disk, and the checkpoint SCN advances immediately:
    SQL> alter system checkpoint
      2  /
    System altered.
    SQL> select name,checkpoint_change#,status
      2    from v$datafile
      3  /
    NAME                              CHECKPOINT_CHANGE# STATUS
    /u01/app/oracle/oradata/mmi/system01.dbf                    921098 SYSTEM
    /u01/app/oracle/oradata/mmi/sysaux01.dbf                    921098 ONLINE
    /u01/app/oracle/oradata/mmi/undotbs01.dbf                   921098 ONLINE
    /u01/app/oracle/oradata/mmi/users01.dbf                     917269 OFFLINE
    SQL> alter system checkpoint
      2  /
    System altered.
    SQL> select name,checkpoint_change#,status
      2    from v$datafile
      3  /
    NAME                              CHECKPOINT_CHANGE# STATUS
    /u01/app/oracle/oradata/mmi/system01.dbf                    921107 SYSTEM
    /u01/app/oracle/oradata/mmi/sysaux01.dbf                    921107 ONLINE
    /u01/app/oracle/oradata/mmi/undotbs01.dbf                   921107 ONLINE
/u01/app/oracle/oradata/mmi/users01.dbf                     917269 OFFLINE
If you wait for a while after a log switch, once the redo buffer data has been flushed to the redo log file, you will see:
    SQL> alter system switch logfile
      2  /
    System altered.
    SQL> select name,checkpoint_change#,status
      2    from v$datafile
      3  /
    NAME                              CHECKPOINT_CHANGE# STATUS
    /u01/app/oracle/oradata/mmi/system01.dbf                 921107 SYSTEM
    /u01/app/oracle/oradata/mmi/sysaux01.dbf                 921107 ONLINE
    /u01/app/oracle/oradata/mmi/undotbs01.dbf                 921107 ONLINE
    /u01/app/oracle/oradata/mmi/users01.dbf                      917269 OFFLINE
    SQL> /    
    NAME                              CHECKPOINT_CHANGE# STATUS
    /u01/app/oracle/oradata/mmi/system01.dbf                 921107 SYSTEM
    /u01/app/oracle/oradata/mmi/sysaux01.dbf                 921107 ONLINE
    /u01/app/oracle/oradata/mmi/undotbs01.dbf                 921107 ONLINE
/u01/app/oracle/oradata/mmi/users01.dbf                      917269 OFFLINE
Pause for some time:
    SQL> /
    NAME                              CHECKPOINT_CHANGE# STATUS
    /u01/app/oracle/oradata/mmi/system01.dbf                     921321 SYSTEM
    /u01/app/oracle/oradata/mmi/sysaux01.dbf                     921321 ONLINE
    /u01/app/oracle/oradata/mmi/undotbs01.dbf                    921321 ONLINE
/u01/app/oracle/oradata/mmi/users01.dbf                      917269 OFFLINE
Khurram
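A quick way to watch that lazy log switch checkpoint complete is to compare the controlfile checkpoint SCN with the SCN recorded in a datafile header; a minimal sketch (file 1 chosen arbitrarily):
SQL> select d.checkpoint_change# ctl_scn, h.checkpoint_change# hdr_scn
  2    from v$database d, v$datafile_header h
  3   where h.file# = 1;
The two values converge once the checkpoint completes.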

Logfile switches too fast, but not much content inside

    hi guys,
    Glad to be back.
    I am having this problem at the moment.
I am on a development db; there aren't a lot of actual users.
I have 5 redo log files of 128 MB each.
However, they are filling up fast. From v$archived_log:
    /rdsdbdata/db/SOCIALSG_A/arch/redolog-36159-1-781327165.arc     36159     28-AUG-12 05:13:57
    /rdsdbdata/db/SOCIALSG_A/arch/redolog-36158-1-781327165.arc     36158     28-AUG-12 05:08:50
    /rdsdbdata/db/SOCIALSG_A/arch/redolog-36157-1-781327165.arc     36157     28-AUG-12 05:03:52
Almost 5 minutes per log switch.
But I have no idea why they are filling up so fast, so I used LogMiner to go into the redo logs and have a look.
In the view v$logmnr_contents I do not see many transactions (only about 250 rows).
So how do I know which rows are actually filling up this 128 MB? 250 rows of data can't be that big (128 MB?).
I checked the individual archivelog file sizes (blocks * block_size) in v$archived_log and they aren't even 100 MB yet.
Is there any other reason why my logfiles are switching so fast?
What should I do next?
    Regards,
    Noob

hello,
5 minutes per log switch is not that much; the commonly recommended log switch frequency is about 3 per hour.
I've seen dbs with several log switches per minute, so that is not really that high. Check:
General Guideline For Sizing The Online Redo Log Files [ID 781999.1]
As for 250 rows -> 128 MB: that means each row accounts for roughly 500 KB; it depends on what is being loaded/updated/etc.
So you should just provision, space-wise, for the number of archivelogs that will be generated on average in any given hour or day.
Also, archivelogs are usually smaller than your redo logs.
BR,
Pinela.
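To put numbers on the switch rate, a query like this against v$log_history shows how many switches each hour generates; a minimal sketch:
SQL> select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
  2    from v$log_history
  3   group by to_char(first_time,'YYYY-MM-DD HH24')
  4   order by 1;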

  • Want to reduce Log switch time interval !!!

Friends,
I know that the standard log switch time interval is 20-30 minutes, i.e. it is better if redolog 1 switches to redolog 2 (or redolog 2 to redolog 3) within 20-30 minutes.
But on my production server the logfile switches every 60 minutes during peak hours. Now my question: how can I make my logfile switch to the next logfile every 20-30 minutes?
My database configuration is:
Oracle Database 10g (version 10.2.0.1.0) on an AIX 5.3 server
AND
SQL> show parameter fast_start_mttr_target
NAME TYPE VALUE
fast_start_mttr_target integer 600
Every redolog file is 50 MB.
In this situation, please advise how I can reduce my log switch time interval.

    You could either
    a. Recreate your RedoLog files with a smaller size --- which action I would not recommend
    OR
    b. Set the instance parameter ARCHIVE_LAG_TARGET to 1800
ARCHIVE_LAG_TARGET specifies (in seconds) the maximum time after which a log switch is forced, if one has not already happened because the online redo log file filled up.
    You should be able to use ALTER SYSTEM to change this value.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
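A minimal sketch of option (b), assuming the instance uses an spfile (hence scope=both):
SQL> alter system set archive_lag_target=1800 scope=both;
With this set, a log switch is forced at most 30 minutes after the previous one, even if the 50 MB logfile is not yet full.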

  • Log switch and checkpoint - Oracle 11g

    Hi
I've read the documentation and the forums, but I can't find a clean explanation. I'd like to ask: does a log switch trigger a checkpoint? I've heard that from the 8i version a checkpoint doesn't occur on log switch, but I can't find that information in the documentation.
Thanks awfully for the help.
    Regards

Before 8i, a logfile switch caused a full checkpoint.
In 8i and above, a logfile switch no longer causes a full checkpoint; it causes a "log switch checkpoint".
A log switch checkpoint writes the contents of "some" dirty buffers to disk.
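One way to see this for yourself is to have checkpoints recorded in the alert log and then force a switch; a minimal sketch:
SQL> alter system set log_checkpoints_to_alert=true;
SQL> alter system switch logfile;
The alert log should then show messages along the lines of "Beginning log switch checkpoint up to RBA ..." followed, some time later, by "Completed checkpoint up to RBA ...".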

  • Win 2003 Cluster + Oracle Fail Safe + Dataguard (physical & Logical)

Hello,
It's my first post (sorry for my bad English)... I am setting up a high-availability solution for test purposes. For the moment I have set up the following and it runs OK, but I have a little problem with the logical database:
Configuration
ESX Server 2.0 with these machines:
Windows 2003 Cluster (Enterprise Edition R2, 2 nodes)
* NODE 1 - Oracle 10gR2 + Patch 9 + Oracle Fail Safe 3.3.4
* NODE 2 - Oracle 10gR2 + Patch 9 + Oracle Fail Safe 3.3.4
c:/ Windows Software
e:/ Oracle Software (pfile -> R:/spfile)
Virtual SAN
* Datafiles, redos, etc. are in the virtual SAN.
R:/ Datafiles & archives & dump files & spfile
S:/, T:/, U:/ -> Redos
V:/ Undo
Data Guard
* NODE3 Physical standby database
* NODE4 Logical standby database
The Oracle Fail Safe and Windows cluster run OK, switchovers included...
The physical standby runs OK (redo apply, switchover, failover, all OK), but the logical standby receives the redo OK and then hits a problem when it goes to apply it.
The error is the following:
ORA-12801: error signaled in parallel query server P004
ORA-06550: line 1, column 536:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternativel
update "SYS"."JOB$" set "LAST_DATE"=TO_DATE('11/09/07','DD/MM/RR'),
I saw this SQL statement in dba_logstdby_events; it appeared together with the error in the alert log and in dba_logstdby_events.
I'm a bit lost with this error. I don't understand why the logical database can't start to apply the redo received from the primary database.
The database has two tables with two columns each, one INTEGER and the other a VARCHAR2(25). It has no unusual column types.
Thanks a lot for any help,
Roberto Marotta

I recreated the logical database OK, no problem, no errors.
The redo apply runs OK. I have done logfile switches on the primary database and they were applied on the logical and physical standby databases. But...
When I created a tablespace on the primary database and did a logfile switch there, the changes transferred OK to the physical standby, but NOT to the logical one!!! The redo arrives in its path on the logical standby OK, but when the process tries to apply it, it reports the same error.
SQL> select sequence#, first_time, next_time, dict_begin, dict_end, applied from dba_logstdby_log order by 1;
SEQUENCE# FIRST_TI NEXT_TIM DIC DIC APPLIED
--------- -------- -------- --- --- -------
138 14/09/07 14/09/07 NO NO CURRENT
139 14/09/07 14/09/07 NO NO CURRENT
SQL> select event_time, status, event from dba_logstdby_events order by event_time, timestamp, commit_scn;
14/09/07
ORA-16222: automatic Logical Standby retry of last action
14/09/07
ORA-16111: log mining and apply setting up
14/09/07
ORA-06550: line 1, column 536:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternativel
update "SYS"."JOB$" set "LAST_NAME" = TO_DATE('14/09/07','DD/MM/RR'),
The alert.log reports the same message as the dba_logstdby_events view.
Any idea?
I'm a bit frustrated. This is the third time I have recreated the logical database OK and then reproduced the same error by creating a tablespace on the primary database, and I have no idea why.
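The statement SQL Apply keeps failing on is internal job traffic against SYS.JOB$, not your own tables. One possible workaround sketch, assuming that statement can safely be skipped on this standby (stop apply, register a skip rule, restart apply):
SQL> alter database stop logical standby apply;
SQL> execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'SYS', object_name => 'JOB$');
SQL> alter database start logical standby apply immediate;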

  • Problems while creating a physical Standby

    Hi,
    I am trying to setup a physical standby database with oracle 10g.
    I configured a specific log archive destination:
    LOG_ARCHIVE_DEST_3 = 'SERVICE=ORAMPSEC REOPEN=60 MAX_FAILURE=3 LGWR SYNC
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORAMPSEC'
    The service is reachable via network.
To establish the standby database I copied all datafiles from the primary database using scp. I also created a standby controlfile and a modified pfile, and added a standby redo logfile of the same size as the online redo log files on the primary database.
After starting the standby database and trying to open it read-only, I receive the following error message:
    Database mounted.
    ORA-16004: backup database requires recovery
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oradata/ORAMPPRD/data/system01.dbf'
When I try to recover using "recover standby database" I receive the following message:
    ORA-00279: change 884348 generated at 07/18/2006 17:08:07 needed for thread 1
    ORA-00289: suggestion : /oradata/ORAMPPRD/archive/1_30_595767954.dbf
    ORA-00280: change 884348 for thread 1 is in sequence #30
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Although I transferred all archived log files from the primary database, the archived log file mentioned above is not available. After hitting RETURN I get this message:
    ORA-00308: cannot open archived log
    '/oradata/ORAMPPRD/archive/1_30_595767954.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oradata/ORAMPPRD/data/system01.dbf'
    Here is a listing of all archived logfiles from the primary database:
    bash-3.00$ ls -latr
    total 494144
    drwxr-xr-x 6 oracle oinstall 512 Jul 14 11:03 ..
    -rw-r----- 1 oracle oinstall 20732928 Jul 17 11:47 1_6_595767954.dbf
    -rw-r----- 1 oracle oinstall 86013440 Jul 17 13:56 1_7_595767954.dbf
    -rw-r----- 1 oracle oinstall 214016 Jul 17 13:57 1_8_595767954.dbf
    -rw-r----- 1 oracle oinstall 1986560 Jul 17 14:10 1_9_595767954.dbf
    -rw-r----- 1 oracle oinstall 150016 Jul 17 14:10 1_10_595767954.dbf
    -rw-r----- 1 oracle oinstall 504320 Jul 17 14:17 1_11_595767954.dbf
    -rw-r----- 1 oracle oinstall 1807872 Jul 17 14:22 1_12_595767954.dbf
    -rw-r----- 1 oracle oinstall 589824 Jul 17 14:25 1_13_595767954.dbf
    -rw-r----- 1 oracle oinstall 1190912 Jul 17 14:37 1_14_595767954.dbf
    -rw-r----- 1 oracle oinstall 584704 Jul 17 14:42 1_15_595767954.dbf
    -rw-r----- 1 oracle oinstall 80896 Jul 17 14:45 1_16_595767954.dbf
    -rw-r----- 1 oracle oinstall 6050816 Jul 17 15:08 1_17_595767954.dbf
    -rw-r----- 1 oracle oinstall 4238848 Jul 17 16:04 1_18_595767954.dbf
    -rw-r----- 1 oracle oinstall 4920832 Jul 17 17:21 1_19_595767954.dbf
    -rw-r----- 1 oracle oinstall 1520128 Jul 17 17:30 1_20_595767954.dbf
    -rw-r----- 1 oracle oinstall 360960 Jul 17 17:35 1_21_595767954.dbf
-rw-r----- 1 oracle oinstall 89186304 Jul 18 10:52 1_22_595767954.dbf
-rw-r----- 1 oracle oinstall 16216576 Jul 18 14:18 1_23_595767954.dbf
-rw-r----- 1 oracle oinstall 12288 Jul 18 14:18 1_24_595767954.dbf
-rw-r--r-- 1 oracle oinstall 2073 Jul 18 14:26 sqlnet.log
-rw-r----- 1 oracle oinstall 14387200 Jul 18 16:56 1_25_595767954.dbf
-rw-r----- 1 oracle oinstall 116736 Jul 18 16:58 1_26_595767954.dbf
-rw-r----- 1 oracle oinstall 1536000 Jul 18 17:04 1_27_595767954.dbf
-rw-r----- 1 oracle oinstall 156672 Jul 18 17:05 1_28_595767954.dbf
drwxr-xr-x 2 oracle oinstall 1024 Jul 18 17:08 .
-rw-r----- 1 oracle oinstall 65536 Jul 18 17:08 1_29_595767954.dbf
Nothing is known about the *30* dbf file.
    Although there still seems to be something wrong, the archived log file transmission seems to work since there is no error reported on the log archive destination:
    SQL> select status, error from v$archive_dest where dest_id = 3;
    STATUS ERROR
    VALID
    And after I manually force a log switch I see that archived redo logs were applied on the standby database:
    select sequence#, applied from v$archived_log order by sequence#;
    SEQUENCE# APPLIED
    27 YES
    28 NO
    28 YES
    29 NO
    29 YES
Unfortunately, because of the recovery problem I cannot open the database to see whether the changes were applied. So my first question is: how can I get the standby database completely recovered?
In addition, I was expecting the changes to reach the standby database immediately, since I chose LGWR SYNC, but I have to force the logfile switch manually. As I already mentioned, a standby redo log file is available as well.
    Thanks for your help,
    Philipp.
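For what it's worth, sequence #30 is the current online log on the primary; it only shows up on the standby after it has been archived. The usual approach is not to open the standby but to leave it in managed recovery and force the primary to archive its current log; a minimal sketch:
On the primary:
SQL> alter system archive log current;
On the standby:
SQL> alter database recover managed standby database disconnect from session;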

    Hi,
Since it does not seem to work by just copying the datafiles, I switched to RMAN. I created a full database backup from Enterprise Manager. Similar to http://dizwell.com/main/content/view/81/122/1/1/ I tried to duplicate the database to the standby instance (running on a different server). But unfortunately I receive error messages that the previously created files cannot be found:
channel d1: starting datafile backupset restore
channel d1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /oradata/ORAMPPRD/data/1.dbf
restoring datafile 00002 to /oradata/ORAMPPRD/data/2.dbf
restoring datafile 00003 to /oradata/ORAMPPRD/data/3.dbf
restoring datafile 00004 to /oradata/ORAMPPRD/data/4.dbf
channel d1: reading from backup piece /opt/oracle/ora10/product/10.2.0.1.0/dbs/0bhoiq60_1_1
ORA-19870: error reading backup piece /opt/oracle/ora10/product/10.2.0.1.0/dbs/0bhoiq60_1_1
ORA-19505: failed to identify file "/opt/oracle/ora10/product/10.2.0.1.0/dbs/0bhoiq60_1_1"
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
failover to previous backup
    But when I crosscheck my backup inside RMAN it clearly shows the backup files:
    RMAN> crosscheck backup;
backup piece handle=/opt/oracle/ora10/product/10.2.0.1.0/dbs/0bhoiq60_1_1 recid=7 stamp=596207811
    crosschecked backup piece: found to be 'AVAILABLE'
    I already checked the file permissions, everybody on the system is able to access this file.
Do you know what is going wrong here?
    Cheers,
    Philipp.
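One thing worth checking: RMAN duplicate reads the backup pieces on the auxiliary (standby) host, so they must exist there at the same path as recorded in the catalog. A sketch of one way to get them there ("standbyhost" is a placeholder):
$ scp /opt/oracle/ora10/product/10.2.0.1.0/dbs/0bhoiq60_1_1 oracle@standbyhost:/opt/oracle/ora10/product/10.2.0.1.0/dbs/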

  • Help modifying a powershell script

    Hello,
I have recently been given a task to write/find a script that is capable of performing full and incremental backups. I found a script that does exactly what I need; however, it requires user input. I need this to be a scheduled task, and therefore I need the input to be a static path. Here is the script I am talking about:
#region Params
param(
    [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$true)]
    [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Container'})]
    [System.String]
    $SourceDir,
    [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$false)]
    [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Container'})]
    [System.String]
    $DestDir,
    [Parameter(Position=2, Mandatory=$false,ValueFromPipeline=$false)]
    [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Container'})]
    [System.String]
    $HashPath,
    [Parameter(Position=3, Mandatory=$false,ValueFromPipeline=$false)]
    [ValidateSet("Full","Incremental","Differential")]
    [System.String]
    $BackupType="Full",
    [Parameter(Position=4, Mandatory=$false,ValueFromPipeline=$false)]
    [ValidateNotNullOrEmpty()]
    [System.String]
    $LogFile=".\Backup-Files.log",
    [Parameter(Position=5, Mandatory=$false,ValueFromPipeline=$false)]
    [System.Management.Automation.SwitchParameter]
    $SwitchToFull
)
#endregion

begin{

    function Write-Log{
        #region Params
        [CmdletBinding()]
        [OutputType([System.String])]
        param(
            [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$true)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $Message,
            [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$true)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $LogFile
        )
        #endregion
        try{
            Write-Host $Message
            Out-File -InputObject $Message -Append $LogFile
        }
        catch {throw $_}
    }

    function Get-Hash{
        #region Params
        [CmdletBinding()]
        [OutputType([System.String])]
        param(
            [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$true)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $HashTarget,
            [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateSet("File","String")]
            [System.String]
            $HashType
        )
        #endregion
        begin{
            try{ $objGetHashMD5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider }
            catch {throw $_ }
        }
        process{
            try {
                #Checking whether the hash target is a file or just a string
                switch($HashType){
                    "String" {
                        $objGetHashUtf8 = New-Object -TypeName System.Text.UTF8Encoding
                        $arrayGetHashHash = $objGetHashMD5.ComputeHash($objGetHashUtf8.GetBytes($HashTarget.ToUpper()))
                        break
                    }
                    "File" {
                        $arrayGetHashHash = $objGetHashMD5.ComputeHash([System.IO.File]::ReadAllBytes($HashTarget))
                        break
                    }
                }
                #Return hash
                Write-Output $([System.Convert]::ToBase64String($arrayGetHashHash))
            }
            catch { throw $_ }
        }
    }

    function Copy-File{
        #region Params
        [CmdletBinding()]
        [OutputType([System.String])]
        param(
            [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Any'})]
            [System.String]
            $SourceFile,
            [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $DestFile
        )
        #endregion
        try{
            #The script fails when a folder is copied over a file, so the item is removed first to avoid the error.
            if(Test-Path -LiteralPath $DestFile -PathType Any){
                Remove-Item -LiteralPath $DestFile -Force -Recurse
            }
            #Creating the destination if it doesn't exist. Required because Copy-Item doesn't create the destination folder.
            if(Test-Path -LiteralPath $SourceFile -PathType Leaf){
                New-Item -ItemType "File" -Path $DestFile -Force
            }
            #Copying file to destination directory
            Copy-Item -LiteralPath $SourceFile -Destination $DestFile -Force
        }
        catch{ throw $_ }
    }

    function Backup-Files{
        #region Params
        [CmdletBinding()]
        [OutputType([System.String])]
        param(
            [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Container'})]
            [System.String]
            $SourceDir,
            [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $DestDir,
            [Parameter(Position=2, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNull()]
            [System.Collections.Hashtable]
            $HashTable
        )
        #endregion
        try{
            $xmlBackupFilesHashFile = $HashTable
            Write-Host "Backup started"
            Get-ChildItem -Recurse -Path $SourceDir|ForEach-Object{
                $currentBackupFilesItem = $_
                #Full path to source and destination item
                $strBackupFilesSourceFullPath = $currentBackupFilesItem.FullName
                $strBackupFilesDestFullPath = $currentBackupFilesItem.FullName.Replace($SourceDir,$DestDir)
                #Checking that the current item is a file and not a directory. True - the item is a file.
                $bBackupFilesFile = $($($currentBackupFilesItem.Attributes -band [System.IO.FileAttributes]::Directory) -ne [System.IO.FileAttributes]::Directory)
                Write-Host -NoNewline ">>>Processing item $strBackupFilesSourceFullPath..."
                #Generating path hash
                $hashBackupFilesPath = $(Get-Hash -HashTarget $strBackupFilesSourceFullPath -HashType "String")
                $hashBackupFilesFile = "d"
                #If the item is a file then generate a hash for the file content
                if($bBackupFilesFile){
                    $hashBackupFilesFile = $(Get-Hash -HashTarget $strBackupFilesSourceFullPath -HashType "File")
                }
                #Checking whether the file has already been copied
                if($xmlBackupFilesHashFile[$hashBackupFilesPath] -ne $hashBackupFilesFile){
                    Write-Host -NoNewline $("hash changed=>$hashBackupFilesFile...")
                    Copy-File -SourceFile $strBackupFilesSourceFullPath -DestFile $strBackupFilesDestFullPath|Out-Null
                    #Returning result
                    Write-Output @{$hashBackupFilesPath=$hashBackupFilesFile}
                }
                else{
                    Write-Host -NoNewline "not changed..."
                }
                Write-Host "done"
            }
            Write-Host "Backup completed"
        }
        catch { throw $_ }
    }

    function Backup-Full{
        [CmdletBinding()]
        [OutputType([System.String])]
        #region Params
        param(
            [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Container'})]
            [System.String]
            $SourceDir,
            [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $DestDir,
            [Parameter(Position=2, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $HashFile,
            [Parameter(Position=3, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $ChainKey
        )
        #endregion
        try{
            #Creating an empty hash table
            $xmlBackupFullHashFile = @{}
            #Starting directory lookup
            $uintBackupFullCount = 0
            Backup-Files -SourceDir $SourceDir -DestDir $("$DestDir\$ChainKey\Full_" + $(Get-Date -Format "ddMMyyyy")) -HashTable $xmlBackupFullHashFile|`
            ForEach-Object{
                $xmlBackupFullHashFile.Add([string]$_.Keys,[string]$_.Values)
                $uintBackupFullCount++
            }
            #Saving chain key.
            $xmlBackupFullHashFile.Add("ChainKey",$ChainKey)
            Write-Host -NoNewline "Saving XML file to $HashFile..."
            Export-Clixml -Path $HashFile -InputObject $xmlBackupFullHashFile -Force
            Write-Host "done"
            Write-Output $uintBackupFullCount
        }
        catch { throw $_ }
    }

    function Backup-Diff{
        #region Params
        [CmdletBinding()]
        [OutputType([System.String])]
        param(
            [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Container'})]
            [System.String]
            $SourceDir,
            [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $DestDir,
            [Parameter(Position=2, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateScript({Test-Path -LiteralPath $_ -PathType 'leaf'})]
            [System.String]
            $HashFile
        )
        #endregion
        try{
            #Loading hash table
            $xmlBackupDiffHashFile = Import-Clixml $HashFile
            $chainKeyBackupDiffDifferential = $xmlBackupDiffHashFile["ChainKey"]
            $uintBackupDiffCount = 0
            #Starting directory lookup
            Backup-Files -SourceDir $SourceDir -DestDir $("$DestDir\$chainKeyBackupDiffDifferential\Differential_" + $(Get-Date -Format "ddMMyyyy.HHmm")) -HashTable $xmlBackupDiffHashFile|`
            ForEach-Object{ $uintBackupDiffCount++ }
            Write-Output $uintBackupDiffCount
        }
        catch { throw $_ }
    }

    function Backup-Inc{
        #region Params
        [CmdletBinding()]
        [OutputType([System.String])]
        param(
            [Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateScript({Test-Path -LiteralPath $_ -PathType 'Container'})]
            [System.String]
            $SourceDir,
            [Parameter(Position=1, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateNotNullOrEmpty()]
            [System.String]
            $DestDir,
            [Parameter(Position=2, Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateScript({Test-Path -LiteralPath $_ -PathType 'leaf'})]
            [System.String]
            $HashFile
        )
        #endregion
        try{
            #Loading hash table
            $xmlBackupIncHashFile = Import-Clixml $HashFile
            $chainKeyBackupIncIncremental = $xmlBackupIncHashFile["ChainKey"]
            $uintBackupIncCount = 0
            #Starting directory lookup
            Backup-Files -SourceDir $SourceDir -DestDir $("$DestDir\$chainKeyBackupIncIncremental\Incremental_" + $(Get-Date -Format "ddMMyyyy.HHmm")) -HashTable $xmlBackupIncHashFile|`
            ForEach-Object{
                $xmlBackupIncHashFile[[string]$_.Keys]=[string]$_.Values
                $uintBackupIncCount++
            }
            Write-Host -NoNewline "Saving XML file to $HashFile..."
            Export-Clixml -Path $HashFile -InputObject $xmlBackupIncHashFile -Force
            Write-Host "Done"
            Write-Output $uintBackupIncCount
        }
        catch { throw $_ }
    }

    #0 - is OK. 1 - some error
    $exitValue=0
}
process{
    try{
        $filesCopied=0
        $strSourceFolderName = $(Get-Item $SourceDir).Name
        $strHasFile = $("$HashPath\Hash_$strSourceFolderName.xml")
        $strMessage = $($(Get-Date -Format "HH:mm_dd.MM.yyyy;") + "$BackupType backup of $SourceDir started")
        #Automatically switch to a full backup if no hash file exists yet
        $bSwitch = $(!$(Test-Path -LiteralPath $strHasFile -PathType "Leaf") -and $SwitchToFull)
        Write-Log -Message $strMessage -LogFile $LogFile
        switch($true){
            $($BackupType -eq "Full" -or $bSwitch) {
                $filesCopied = Backup-Full -SourceDir $SourceDir -DestDir $DestDir -HashFile $strHasFile -ChainKey $("Backup_$strSourceFolderName" + "_" + $(Get-Date -Format "ddMMyyyy"))
                break
            }
            $($BackupType -eq "Incremental") {
                $filesCopied = Backup-Inc -SourceDir $SourceDir -DestDir $DestDir -HashFile $strHasFile
                break
            }
            $($BackupType -eq "Differential") {
                $filesCopied = Backup-Diff -SourceDir $SourceDir -DestDir $DestDir -HashFile $strHasFile
                break
            }
        }
        $strMessage = $($(Get-Date -Format "HH:mm_dd.MM.yyyy;") + "$BackupType backup of $SourceDir completed successfully. $filesCopied items were copied.")
        Write-Log -Message $strMessage -LogFile $LogFile
        Write-Output $filesCopied
    }
    catch {
        $strMessage = $($(Get-Date -Format "HH:mm_dd.MM.yyyy;") + "$BackupType backup of $SourceDir failed:" + $_)
        Write-Log -Message $strMessage -LogFile $LogFile
        $exitValue = 1
    }
}
end{exit $exitValue}
I have some experience writing PowerShell scripts, but I am lost as to how this script prompts for the Source and Destination paths. I tried modifying the param section, but this didn't work, and up until now I thought the only way you could get a prompt was with "read-host". Any and all education on this matter would be greatly appreciated. (Side note: I have posted this question on the forum in which I found it and have not gotten an answer yet.)

    Hi Ryan Blaeholder,
    Thanks for your posting.
To schedule a PowerShell script with input values, instead of modifying the script above you can also add the input when creating the scheduled task, like this (save the script above as D:\backup.ps1):
-command "& 'D:\backup.ps1' 'input1' 'input2'"
For more detailed information, please refer to this article:
Schedule PowerShell Scripts that Require Input Values:
http://blogs.technet.com/b/heyscriptingguy/archive/2011/01/12/schedule-powershell-scripts-that-require-input-values.aspx
    I hope this helps.
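For example, a scheduled task action along these lines passes static paths into the script's parameters (the task name, time, and paths are placeholders to adapt):
schtasks /create /tn "NightlyBackup" /sc daily /st 22:00 /tr "powershell.exe -NoProfile -ExecutionPolicy Bypass -Command \"& 'D:\backup.ps1' -SourceDir 'D:\Data' -DestDir 'E:\Backups' -HashPath 'E:\Backups' -BackupType Incremental -SwitchToFull\""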

  • Abcdef

    to_date('03-23-2010','mm-dd-yyyy')
    to_date('2008-06-08','yyyy-mm-dd')
    DBMS_OUTPUT.PUT_LINE(' 4th Where clause: ' || WHERE_CLAUSE);
    HKey_Local Machine -> Software -> Microsoft -> MSLicensing
    topas
Removing a batch of files in Linux:
    =====================================
    find . -name "*.arc" -mtime +20 -exec rm -f {} \;
    find . -name "*.dbf" -mtime +60 -exec mv {} /backup/Arch_Bkp_02May11/ \;
    ALTER DATABASE
    SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE | PROTECTION};
    ================================================================================
    Find top N records:
    ===================
select * from (select ename from emp order by sal desc)
where rownum <= &n;
    Find top Nth record: (n=0 for 1st highest)
    =========================================
    select * from emp a
where (&n =
    (select count(distinct b.sal) from emp b
    where b.sal > a.sal));
    Query for Listing last n records from the table
    =================================================
    select * from (select * from emp order by rownum desc) where rownum<4
HOW TO find tablespace-wise and file-wise info
    ============================
    col file_name for a45
    col tablespace_name for a15
    set linesize 132
    select a.tablespace_name,a.file_name,a.AUTOEXTENSIBLE,----a.status,
    round(a.bytes/1024/1024,2) Total_MB,
    round(sum(b.bytes)/1024/1024,2) Free_MB,
    round((a.bytes/1024/1024 - sum(b.bytes)/1024/1024),2) Used_MB
    from dba_data_files a,dba_free_space b
    where a.file_id=b.file_id
    and a.tablespace_name=b.tablespace_name
    group by a.tablespace_name,b.file_id,a.file_name,a.bytes,a.AUTOEXTENSIBLE--,a.status
    order by tablespace_name;
    col tablespace_name for a15
    SELECT tablespace_name,ts_#,num_files,sum_free_mbytes,count_blocks,max_mbytes,
    sum_alloc_mbytes,DECODE(sum_alloc_mbytes,0,0,100 * sum_free_mbytes /sum_alloc_mbytes ) AS pct_free
    FROM (SELECT v.name AS tablespace_name,ts# AS ts_#,
    NVL(SUM(bytes)/1048576,0) AS sum_alloc_mbytes,
    NVL(COUNT(file_name),0) AS num_files
    FROM dba_data_files f,v$tablespace v
    WHERE v.name = f.tablespace_name (+)
    GROUP BY v.name,ts#),
    (SELECT v.name AS fs_ts_name,ts#,NVL(MAX(bytes)/1048576,0) AS max_mbytes,
    NVL(COUNT(BLOCKS) ,0) AS count_blocks,
    NVL(SUM(bytes)/1048576,0) AS sum_free_mbytes
    FROM dba_free_space f,v$tablespace v
    WHERE v.name = f.tablespace_name(+)
    GROUP BY v.name,ts#)
    WHERE tablespace_name = fs_ts_name
    ORDER BY tablespace_name;
    ==================================
    col file_name for a45
    col tablespace_name for a15
    set linesize 132
    select a.tablespace_name,a.file_name,a.AUTOEXTENSIBLE,----a.status,
    round(a.bytes/1024/1024,2) Total_MB,
    round(sum(b.bytes)/1024/1024,2) Free_MB,
    round((a.bytes/1024/1024 - sum(b.bytes)/1024/1024),2) Used_MB
    from dba_data_files a,dba_free_space b
    where a.file_id=b.file_id
    and a.tablespace_name=b.tablespace_name
    group by a.tablespace_name,b.file_id,a.file_name,a.bytes,a.AUTOEXTENSIBLE--,a.status
    order by file_name;
    =============================================================
    HOW TO FIND CHILD TABLES
    ===========================================
    col column_name for a30
    col owner for a10
    set linesize 132
    select --a.table_name parent_table,
    b.owner,
    b.table_name child_table
    , a.constraint_name , b.constraint_name
    from dba_constraints a ,dba_constraints b
    where a.owner='LEIQA20091118'
    and a.constraint_name = b.r_constraint_name
    --and b.constraint_type = 'R'
    and a.constraint_type IN ('P','U')
    and a.table_name =upper('&tabname');
    List foreign keys and referenced table and columns:
    ======================================================
    SELECT DECODE(c.status,'ENABLED','C','c') t,
    SUBSTR(c.constraint_name,1,31) relation,
    SUBSTR(cc.column_name,1,24) columnname,
    SUBSTR(p.table_name,1,20) tablename
    FROM user_cons_columns cc, user_constraints p,
    user_constraints c
    WHERE c.table_name = upper('&table_name')
    AND c.constraint_type = 'R'
    AND p.constraint_name = c.r_constraint_name
    AND cc.constraint_name = c.constraint_name
    AND cc.table_name = c.table_name
    UNION ALL
    SELECT DECODE(c.status,'ENABLED','P','p') t,
    SUBSTR(c.constraint_name,1,31) relation,
    SUBSTR(cc.column_name,1,24) columnname,
    SUBSTR(c.table_name,1,20) tablename
    FROM user_cons_columns cc, user_constraints p,
    user_constraints c
    WHERE p.table_name = upper('PERSON')
    AND p.constraint_type in ('P','U')
    AND c.r_constraint_name = p.constraint_name
    AND c.constraint_type = 'R'
    AND cc.constraint_name = c.constraint_name
    AND cc.table_name = c.table_name
ORDER BY 1, 4, 2, 3;
    List a child table's referential constraints and their associated parent table:
    ==============================================================
    SELECT t.owner CHILD_OWNER,
    t.table_name CHILD_TABLE,
    t.constraint_name FOREIGN_KEY_NAME,
    r.owner PARENT_OWNER,
    r.table_name PARENT_TABLE,
    r.constraint_name PARENT_CONSTRAINT
    FROM user_constraints t, user_constraints r
    WHERE t.r_constraint_name = r.constraint_name
    AND t.r_owner = r.owner
    AND t.constraint_type='R'
    AND t.table_name = <child_table_name>;
    parent tables:
    ================
    select constraint_name,constraint_type,r_constraint_name
    from dba_constraints
    where table_name ='TM_PAY_BILL'
    and constraint_type in ('R');
    select CONSTRAINT_NAME,TABLE_NAME,COLUMN_NAME from user_cons_columns where table_name='FS_FR_TERMINALLOCATION';
    select a.OWNER,a.TABLE_NAME,a.CONSTRAINT_NAME,a.CONSTRAINT_TYPE
    ,b.COLUMN_NAME,b.POSITION
    from dba_constraints a,dba_cons_columns b
    where a.CONSTRAINT_NAME=b.CONSTRAINT_NAME
    and a.TABLE_NAME=b.TABLE_NAME
    and a.table_name=upper('TM_GEN_INSTRUCTION')
    and a.constraint_type in ('P','U');
    select constraint_name,constraint_type,r_constraint_name
    from dba_constraints
    where table_name ='TM_PAY_BILL'
    and constraint_type in ('R');
    ===============================================
    HOW TO FIND INDEXES
    =====================================
    col column_name for a30
    col owner for a25
    select a.owner,a.index_name, --a.table_name,a.tablespace_name,
    b.column_name,b.column_position
    from dba_indexes a,dba_ind_columns b
    where a.owner='SCE'
    and a.index_name=b.index_name
    and a.table_name = upper('&tabname')
    order by a.index_name,b.column_position;
    col column_name for a40
    col index_owner for a15
    select index_owner,index_name,column_name,
    column_position from dba_ind_columns
    where table_owner= upper('VISILOGQA19') and table_name ='TBLTRANSACTIONGROUPMAIN';
    -- check for index on FK
    ===============================
    set linesize 121
    col status format a6
    col columns format a30 word_wrapped
    col table_name format a30 word_wrapped
SELECT DECODE(b.table_name, NULL, 'Not Indexed', 'Indexed') STATUS,
       a.table_name, a.columns, b.columns
  FROM (SELECT SUBSTR(a.table_name,1,30) table_name,
               SUBSTR(a.constraint_name,1,30) constraint_name,
               MAX(DECODE(position, 1,      SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 2,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 3,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 4,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 5,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 6,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 7,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 8,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position, 9,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position,10,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position,11,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position,12,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position,13,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position,14,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position,15,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(position,16,', '||SUBSTR(column_name,1,30),NULL)) columns
          FROM user_cons_columns a, user_constraints b
         WHERE a.constraint_name = b.constraint_name
           AND constraint_type = 'R'
         GROUP BY SUBSTR(a.table_name,1,30), SUBSTR(a.constraint_name,1,30)) a,
       (SELECT SUBSTR(table_name,1,30) table_name,
               SUBSTR(index_name,1,30) index_name,
               MAX(DECODE(column_position, 1,      SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 2,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 3,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 4,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 5,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 6,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 7,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 8,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position, 9,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position,10,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position,11,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position,12,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position,13,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position,14,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position,15,', '||SUBSTR(column_name,1,30),NULL)) ||
               MAX(DECODE(column_position,16,', '||SUBSTR(column_name,1,30),NULL)) columns
          FROM user_ind_columns
         GROUP BY SUBSTR(table_name,1,30), SUBSTR(index_name,1,30)) b
 WHERE a.table_name = b.table_name (+)
   AND b.columns (+) LIKE a.columns || '%';
    ==================================================
    HOW TO FIND unique keys
    ===========================
    col column_name for a30
    col owner for a10
    set linesize 132
    select a.owner , --a.table_name,
    a.constraint_name,a.constraint_type,
    b.column_name,b.position
    from dba_constraints a, dba_cons_columns b
    where a.table_name = upper('&tabname')
    and a.constraint_name = b.constraint_name
    and a.constraint_type in ('P','U')
    and a.owner=b.owner
    order by a.owner,a.constraint_name,b.position;
    ==================================
    HOW TO FIND ROWlocks
    ======================
    col object_name for a30
    col terminal for a20
    set linesize 1000
    col spid for a10
    col osuser for a15
    select to_char(logon_time,'DD-MON-YYYY HH24:MI:SS'),OSUSER,--owner,
    s.sid, s.serial#,p.spid,
    s.terminal,l.locked_mode,o.object_name,l.ORACLE_USERNAME --,o.object_type
    from v$session s, dba_objects o,v$locked_object l, V$process p
    where o.object_id=l.object_id
    and s.sid=l.session_id
    and s.paddr=p.addr
    order by logon_time;
    SELECT OWNER||'.'||OBJECT_NAME AS Object, OS_USER_NAME, ORACLE_USERNAME,
    PROGRAM, NVL(lockwait,'ACTIVE') AS Lockwait,DECODE(LOCKED_MODE, 2,
    'ROW SHARE', 3, 'ROW EXCLUSIVE', 4, 'SHARE', 5,'SHARE ROW EXCLUSIVE',
    6, 'EXCLUSIVE', 'UNKNOWN') AS Locked_mode, OBJECT_TYPE, SESSION_ID, SERIAL#, c.SID
    FROM SYS.V_$LOCKED_OBJECT A, SYS.ALL_OBJECTS B, SYS.V_$SESSION c
    WHERE A.OBJECT_ID = B.OBJECT_ID AND C.SID = A.SESSION_ID
    ORDER BY Object ASC, lockwait DESC;
    SELECT DECODE(request,0,'Holder: ','Waiter: ')||sid sess,
    id1, id2, lmode, request, type
    FROM V$LOCK
    WHERE (id1, id2, type) IN
    (SELECT id1, id2, type FROM V$LOCK WHERE request>0)
    ORDER BY id1, request;
    find SQL text by OS pid
    =====================
    set linesize 1000
    SELECT --osuser,
    a.username,a.serial#,a.sid,--a.terminal,
    sql_text
    from v$session a, v$sqltext b, V$process p
    where a.sql_address =b.address
    and a.paddr = p.addr
    and p.spid = '&os_pid'
    order by address, piece;
    select sql_text
    from V$sqltext_with_newlines
    where address =
    (select prev_sql_addr
    from V$session
    where username = :uname and sid = :snum) ORDER BY piece;
    set pagesize 50000
    set linesize 30000
    set long 500000
    set head off
    select s.username su,s.sid,s.serial#,substr(sa.sql_text,1,540) txt
    from v$process p,v$session s,v$sqlarea sa
    where p.addr=s.paddr
    and s.username is not null
    and s.sql_address=sa.address(+)
    and s.sql_hash_value=sa.hash_value(+)
    and spid=&SPID;
    privileges
    ===========
    select * from dba_sys_privs where grantee = 'SCE';
    select * from dba_role_privs where grantee = 'SCE';
    select * from dba_sys_privs where grantee in ('CONNECT','APPL_CONNECT');
    Check high_water_mark_statistics
    ===================================
    select * from DBA_HIGH_WATER_MARK_STATISTICS;
    Multiple Blocksizes:
    =========================
    alter system set db_16k_cache_size=64m;
    create tablespace index_ts datafile '/data1/index_ts01.dbf' size 10240m blocksize 16384;
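    Once the 16k cache and tablespace exist, segments are placed in them like any other tablespace. A sketch, assuming a table BIG1 with an ID column:
    create index big1_ix on big1(id) tablespace index_ts;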
    11g default profiles:
    ========================
    alter profile default limit password_lock_time unlimited;
    alter profile default limit password_life_time unlimited;
    alter profile default limit password_grace_time unlimited;
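    To verify the profile changes, the password limits can be read back from dba_profiles:
    select resource_name, limit
    from dba_profiles
    where profile = 'DEFAULT'
    and resource_type = 'PASSWORD';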
    logfile switch over:
    select GROUP#,THREAD#,SEQUENCE#,BYTES,MEMBERS,ARCHIVED,
    STATUS,to_char(FIRST_TIME,'DD-MON-YYYY HH24:MI:SS') switch_time
    from v$log;
    Temporary tablespace usage:
    ============================
    SELECT b.tablespace,
    ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
    a.sid||','||a.serial# SID_SERIAL,
    a.username,
    a.program
    FROM sys.v_$session a,
    sys.v_$sort_usage b,
    sys.v_$parameter p
    WHERE p.name = 'db_block_size'
    AND a.saddr = b.session_addr
    ORDER BY b.tablespace, b.blocks;
    SELECT A2.TABLESPACE, A2.SEGFILE#, A2.SEGBLK#, A2.BLOCKS,
    A1.SID, A1.SERIAL#, A1.USERNAME, A1.OSUSER, A1.STATUS
    FROM V$SESSION A1,V$SORT_USAGE A2 WHERE A1.SADDR = A2.SESSION_ADDR;
    ========================================
    ALTER SYSTEM KILL SESSION 'SID,SERIAL#';
    Inactive sessions killing:
    SELECT 'ALTER SYSTEM KILL SESSION ' || '''' || SID || ',' ||
    serial# || '''' || ' immediate;' text
    FROM v$session
    WHERE status = 'INACTIVE'
    AND last_call_et > 86400
    AND username IN (SELECT username FROM DBA_USERS WHERE user_id>56);
    Procedure:
    CREATE OR REPLACE PROCEDURE Inactive_Session_Cleanup AS
    BEGIN
    FOR rec_session IN (SELECT 'ALTER SYSTEM KILL SESSION ' || '''' || SID || ',' ||
    serial# || '''' || ' immediate' text
    FROM v$session
    WHERE status = 'INACTIVE'
    AND last_call_et > 43200
    AND username IN (SELECT username FROM DBA_USERS WHERE user_id>60)) LOOP
    EXECUTE IMMEDIATE rec_session.text;
    END LOOP;
    END Inactive_Session_Cleanup;
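    One way to run the cleanup regularly (10g and later) is a dbms_scheduler job; a sketch, with the job name and hourly frequency chosen only for illustration:
    begin
      dbms_scheduler.create_job(
        job_name        => 'INACTIVE_SESSION_JOB',    -- assumed name
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'INACTIVE_SESSION_CLEANUP',
        repeat_interval => 'FREQ=HOURLY',             -- assumed interval
        enabled         => TRUE);
    end;
    /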
    sequence using plsql
    =========================
    Declare
    v_next NUMBER;
    script varchar2(5000);
    BEGIN
    SELECT (MAX(et.dcs_code) + 1) INTO v_next FROM et_document_request et;
    script:= 'CREATE SEQUENCE et_document_request_seq
    MINVALUE 1 MAXVALUE 999999999999999999999999999 START WITH '||
         v_next || ' INCREMENT BY 1 CACHE 20';
    execute immediate script;
    end;
    ===========================
    Terminal wise session
    select TERMINAL,count(*) from v$session
    group by TERMINAL;
    total sessions
    select count(*) from v$session
    where TERMINAL not like '%UNKNOWN%'
    and TERMINAL is not null;
    HOW TO FIND DUPLICATE TOKEN NUMBERS
    ===========================================
    select count(distinct a.token_number) dup
    from tm_pen_bill a,tm_pen_bill b
    where a.token_number = b.token_number
    and a.bill_number <> b.bill_number
    and a.token_number is not null;
    when Block Corruption occurs:
    select * from DBA_EXTENTS
    WHERE file_id = '13' AND block_id BETWEEN '44157' and '50649';
    select BLOCK_ID,SEGMENT_NAME,BLOCKS from dba_extents where FILE_ID='14'
    and BLOCK_ID like '%171%';
    select BLOCK_ID,SEGMENT_NAME,BLOCKS from dba_extents where FILE_ID='14'
    and SEGMENT_NAME = 'TEMP_TD_PAY_ALLOTMENT_NMC';
    DBVERIFY:
    dbv blocksize=8192 file=users01.dbf log=dbv_users01.log
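    dbv can also be restricted to a block range, which is handy for re-checking the suspect extents found above; a sketch using the block range from the earlier query (file name assumed):
    dbv file=users01.dbf blocksize=8192 start=44157 end=50649 logfile=dbv_range.log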
    ==============================================================
    DBMS_REPAIR:(Block Corruption)
    exec dbms_repair.admin_tables(table_name=>'REPAIR_TABLE',table_type=>dbms_repair.repair_table,action=>dbms_repair.create_action,tablespace=>'USERS');
    variable v_corrupt_count number;
    exec dbms_repair.check_object('scott','emp',corrupt_count=>:v_corrupt_count);
    print v_corrupt_count;
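    If check_object reports corruption, the blocks can then be marked and skipped so DML no longer fails on them; a sketch against the same SCOTT.EMP example:
    variable v_fix_count number;
    exec dbms_repair.fix_corrupt_blocks('SCOTT','EMP', fix_count=>:v_fix_count);
    print v_fix_count;
    exec dbms_repair.skip_corrupt_blocks('SCOTT','EMP');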
    ==============================================================
    Password:
    select login,substr(utl_raw.cast_to_varchar2(utl_raw.cast_to_varchar2(password)),1,30) password
    from mm_gen_user where active_flag = 'Y' and user_id=64 and LOGIN='GOPAL' ;
    CHARACTERSET
    select * from NLS_DATABASE_PARAMETERS;
    SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET' ;
    select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    ==========================================================
    EXPLAIN PLAN TABLE QUERY
    ========================
    EXPLAIN PLAN SET STATEMENT_ID='5'
    FOR
    "DML STATEMENT"
    PLAN TABLE QUERY
    ===============================
    set linesize 1000
    set arraysize 1000
    col OBJECT_TYPE for a20
    col OPTIMIZER for a20
    col object_name for a30
    col options for a25
    select COST,OPERATION,OPTIONS,OBJECT_TYPE,
    OBJECT_NAME,OPTIMIZER
    --,ID,PARENT_ID,POSITION,CARDINALITY
    from plan_table
    where statement_id='&statement_id';
    Rman settings: disk formats
    %t represents a timestamp
    %s represents the backup set number
    %p represents the piece number
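    For example, the three variables can be combined in a disk channel format so every backup piece gets a unique name (the path is illustrative):
    RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/bk_%t_%s_%p.bkp';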
    The dbms_workload_repository.create_snapshot procedure creates a manual snapshot in the AWR as seen in this example:
    EXEC dbms_workload_repository.create_snapshot;
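    Snapshot frequency and retention can be adjusted with the same package; both values are in minutes (hourly snapshots kept for seven days in this sketch):
    EXEC dbms_workload_repository.modify_snapshot_settings(interval => 60, retention => 10080);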
    Calculate the space occupied by a table (AAA = db_block_size in bytes, XXX = table name)
    ========================================================
    select owner, table_name,
    NUM_ROWS,
    BLOCKS * AAA/1024/1024 "Size M",
    EMPTY_BLOCKS,
    LAST_ANALYZED
    from dba_tables
    where table_name = 'XXX';
    Finding statement/s which use lots of shared pool memory:
    ==========================================================
    SELECT substr(sql_text,1,40) "SQL", count(*) , sum(executions) "TotExecs"
    FROM v$sqlarea
    WHERE executions < 5
    GROUP BY substr(sql_text,1,40)
    HAVING count(*) > 30
    ORDER BY 2;
    Check the size of a table
    =========================================
    select sum(bytes)/(1024*1024) as "size (M)" from user_segments
    where segment_name = upper('&table_name');
    Check the size of an index
    =========================================
    select sum(bytes)/(1024*1024) as "size (M)" from user_segments
    where segment_name = upper('&index_name');
    monitor tablespace I/O ratios
    ====================================
    select B.tablespace_name name, B.file_name "file", A.phyrds pyr,
    A.phyblkrd pbr, A.phywrts pyw, A.phyblkwrt pbw
    from v$filestat A, dba_data_files B
    where A.file# = B.file_id
    order by B.tablespace_name;
    monitor datafile I/O ratios
    =====================================
    select substr(C.file#,1,2) "#", substr(C.name,1,30) "Name",
    C.status, C.bytes, D.phyrds, D.phywrts
    from v$datafile C, v$filestat D
    where C.file# = D.file#;
    monitor the SGA buffer cache hit ratio
    =========================
    -- statistic# values vary between releases, so select the statistics by name
    select a.value + b.value "logical_reads", c.value "phys_reads",
    round(100 * ((a.value + b.value) - c.value) / (a.value + b.value)) "BUFFER HIT RATIO"
    from v$sysstat a, v$sysstat b, v$sysstat c
    where a.name = 'db block gets'
    and b.name = 'consistent gets'
    and c.name = 'physical reads';
    monitor the SGA dictionary cache hit ratio
    ==================================================
    select parameter, gets, getmisses, getmisses/(gets + getmisses)*100 "miss ratio",
    (1 - (sum(getmisses)/(sum(gets) + sum(getmisses))))*100 "Hit ratio"
    from v$rowcache
    where gets + getmisses <> 0
    group by parameter, gets, getmisses;
    monitor the SGA library cache: the reload percent should be less than 1%
    =============================================================
    select sum(pins) "Total Pins", sum(reloads) "Total Reloads",
    sum(reloads)/sum(pins)*100 libcache
    from v$librarycache;
    select sum(pinhits - reloads)/sum(pins) "hit ratio", sum(reloads)/sum(pins) "reload percent"
    from v$librarycache;
    monitor the redo log buffer latches: the miss ratios should be less than 1%
    =========================================================================
    SELECT name, gets, misses, immediate_gets, immediate_misses,
    DECODE(gets, 0, 0, misses/gets*100) ratio1,
    DECODE(immediate_gets + immediate_misses, 0, 0,
    immediate_misses/(immediate_gets + immediate_misses)*100) ratio2
    FROM v$latch WHERE name IN ('redo allocation', 'redo copy');
    monitor the memory vs. disk sort ratio; it should stay below 10%, otherwise increase sort_area_size
    =============================================================================================================
    SELECT name, value FROM v$sysstat WHERE name IN ('sorts (memory)', 'sorts (disk)');
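    The two counters can also be combined into a single ratio query:
    select d.value "disk sorts", m.value "memory sorts",
    round(d.value / decode(m.value + d.value, 0, 1, m.value + d.value) * 100, 2) "disk sort %"
    from v$sysstat m, v$sysstat d
    where m.name = 'sorts (memory)'
    and d.name = 'sorts (disk)';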
    monitor which SQL statements are currently running in the database
    ===================================================================
    SELECT osuser, username, sql_text from v$session a, v$sqltext b
    where a.sql_address = b.address order by address, piece;
    monitor the library and dictionary caches
    =====================================
    SELECT (SUM(PINS - RELOADS))/SUM(PINS) "LIB CACHE" FROM V$LIBRARYCACHE;
    SELECT (SUM(GETS - GETMISSES - USAGE - FIXED))/SUM(GETS) "ROW CACHE" FROM V$ROWCACHE;
    SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING" FROM V$LIBRARYCACHE;
    The second value divided by the first should be less than 1%, ideally close to 0%.
    SELECT SUM(GETS) "DICTIONARY GETS", SUM(GETMISSES) "DICTIONARY CACHE GET MISSES"
    FROM V$ROWCACHE;
    find the segments with the most extents (a sign of fragmentation)
    =================================================
    SELECT owner, segment_name table_name, COUNT(*) extents
    FROM dba_segments WHERE owner NOT IN ('SYS', 'SYSTEM') GROUP BY owner, segment_name
    HAVING COUNT(*) = (SELECT MAX(COUNT(*)) FROM dba_segments GROUP BY segment_name);
    =======================================================================
    Fragmentation:
    =================
    select table_name,round((blocks*8),2)||'kb' "size"
    from user_tables
    where table_name = 'BIG1';
    Actual Data:
    =============
    select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
    from user_tables
    where table_name = 'BIG1';
    Create the standard data dictionary views (8i example)
    =======================================================
    $ORACLE_HOME/RDBMS/ADMIN/CATALOG.SQL
    Create the audit data dictionary views (8i example)
    ======================================================
    $ORACLE_HOME/RDBMS/ADMIN/CATAUDIT.SQL
    Create the snapshot data dictionary views (8i example)
    =====================================================
    $ORACLE_HOME/RDBMS/ADMIN/CATSNAP.SQL
    Move a table/index to another tablespace
    =======================================
    ALTER TABLE TABLE_NAME MOVE TABLESPACE TABLESPACE_NAME;
    ALTER INDEX INDEX_NAME REBUILD TABLESPACE TABLESPACE_NAME;
    How can I know the system's current SCN number?
    =================================================
    select max (ktuxescnw * power (2, 32) + ktuxescnb) from x$ktuxe;
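    The SCN can also be read through supported interfaces instead of x$ktuxe:
    select dbms_flashback.get_system_change_number from dual;
    select current_scn from v$database;  -- 10g and later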
    Keep a small table in the KEEP buffer pool
    ======================================
    alter table xxx storage (buffer_pool keep);
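    The KEEP pool must be sized before tables can use it, and the assignment can be verified from user_tables; a sketch (the 32m size is arbitrary):
    alter system set db_keep_cache_size=32m;
    select table_name, buffer_pool from user_tables where table_name = 'XXX';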
    Check the permissions for each user
    ===================================
    SELECT * FROM DBA_SYS_PRIVS;
    =====================================================================
    Tablespace auto extend check:
    =================================
    col file_name for a50
    select FILE_NAME,TABLESPACE_NAME,AUTOEXTENSIBLE from dba_data_files
    order by TABLESPACE_NAME;
    COL SEGMENT_NAME FOR A30
    select SEGMENT_NAME,TABLESPACE_NAME,BYTES,EXTENTS,INITIAL_EXTENT,
    NEXT_EXTENT,MAX_EXTENTS,PCT_INCREASE
    from user_segments
    where segment_name in ('TD_PAY_CHEQUE_PREPARED','TM_PAY_BILL','TD_PAY_PAYORDER');
    select TABLESPACE_NAME,INITIAL_EXTENT,NEXT_EXTENT,MAX_EXTENTS,PCT_INCREASE
    from dba_tablespaces;
    alter tablespace temp default storage(next 5m maxextents 20480 pctincrease 0);
    ALTER TABLE TD_PAY_CHEQUE_PREPARED
    STORAGE (NEXT 10M MAXEXTENTS 20480 PCTINCREASE 0);
    Moving table from one tablespace to another
    ===============================================
    alter table KHAJANE.TEMP_TM_PAY_ALLOTMENT_NMC move tablespace khajane_ts;
    ==============================================
    for moving a datafile location:
    ========================================
    alter database rename file '<old_file_path>' to '<new_file_path>';
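    A worked sequence for a datafile in the USERS tablespace (paths assumed; the file is copied at OS level while the tablespace is offline):
    alter tablespace users offline;
    host cp /u01/oradata/users01.dbf /u02/oradata/users01.dbf
    alter database rename file '/u01/oradata/users01.dbf' to '/u02/oradata/users01.dbf';
    alter tablespace users online;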
    ======================================================================
    for logfile clearance:
    select * from global_name;
    col member for a50
    set linesize 132
    set trimspool on
    select 'alter database clear logfile ' || '''' || member || '''' || ';'
    from v$logfile where status ='STALE';


  • Mail server is too slow to deliver the mail to internal domain

    Hi,
    My mail server is fast enough when sending mail to other domains, but when I try to send mail to my own domain it is very slow; sometimes it takes 30 to 40 minutes to deliver the mail.
    Please help
    Thanks,
    Gulab Pasha


  • Server is too slow

    I have an Oracle 9i database on Sun Solaris OS,
    IBM server.
    2GB RAM
    4 hard disks
    45 users are logged in to the system.
    It is almost dead slow and sometimes users get dropped from the system; it is almost hung.
    Whenever we restart the server all the processes clear and it works properly. After some time it starts to slow down and performance deteriorates.
    It gradually gets slower over time until at one point it is dead slow and takes 30 minutes to commit a transaction that takes 20 seconds at normal times. When I restart the server it works normally, but gradually it gets slow again.
    Please suggest a checkup procedure.
    The database configuration is standard.

    You should use statspack to check what the main waits are.
    Some indicators to check :
    - too many full table scans/excessive IO => check sql statements (missing index, wrong where clause)
    - explain plan for the most important queries : using cbo or rbo ? If cbo, statistics should be up to date. If rbo, check the access path.
    - excessive logfile switches (> 5 per hour) : increase the logfile size or disable logging
    - undo waits => not enough rollback segments (if you don't use AUM)
    - data waits => alter initrans, pctfree, pctused
    - too many chained rows => rebuild the affected data or rebuild the table
    - too many levels in indexes => rebuild the index
    - excessive parsing : use bind variables or alter the cursor_sharing parameter
    - too many sorts on disk => increase sort_area_size and create other temporary tablespaces on separate disks
    - too many block reads per row => db_block_size too small or too many chained rows
    - too much lru contention => increase latches
    - OS swapping/paging ?
    To improve performance :
    - alter and tune some parameters : optimizer_mode, sort_area_size, shared_pool_size, optimizer_index_cost_adj, db_file_multiblock_read_count...
    - keep the most useful packages in memory
    - gather statistics regularly (if using cbo)
    How do your users access the db ?
    Jean-François Léguillier
    Consultant DBA

  • Regarding force archiving

    Dear all,
    When I need to take a backup of the latest archive log file, which command of the two below is better?
    1. ALTER SYSTEM ARCHIVE LOG CURRENT;
    2. ALTER SYSTEM SWITCH LOGFILE;
    Does both the commands give the same result ?
    Regards,
    Charan

    Hi Charan;
    alter system switch logfile --> switches to the next logfile, irrespective of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. If in archivelog mode, alter system switch logfile will generate the archive of the redo log that was switched.
    alter system archive log current --> Here Oracle switches the current log and archives it, as well as all other unarchived logs. It can be run only on a database that is in ARCHIVELOG mode.
    [http://download.oracle.com/docs/cd/B10501_01/server.920/a96519/backup.htm]
    Source:
    Diff between switch logfile and archive log current
    Regard
    Helios

  • Issue with setting up standby server::::Urgent

    Hello,
    I am setting up a standby database to an existing database (running in NOARCHIVELOG mode). I have made it ARCHIVELOG mode today.
    I have followed the below stated steps to create the standby...
    1. Edited primary and standby init.ora files
    2. Created TNS services and listed both of them in each of the server, and verified that each one of them are accessible from the other.
    3. Both the databases use the same SID, and db_names. But they differ in DB_UNIQUE_NAMEs
    4. Created Standby redo log (SRL) on the primary.
    5. Copied all the files from primary to a standby location (say /u02/app/oradata/standby/)
    6. The directory structures on these servers differ, so made two entries in both init.ora files (DB_FILE_NAME_CONVERT, and LOG_FILE_NAME_CONVERT).
    Values for DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT are exactly same in both the init files ('/primary/structure', '/secondary/structure',...)
    7. On primary machine as sysdba:
    SQL> startup mount;
    SQL> alter database archivelog;
    SQL> alter database open;
    8. On standby machine as sysdba:
    SQL> startup mount pfile='/u02/app/oracle/dbs/initDBASE.ora'
    SQL> alter database recover managed standby database disconnect from session;
    The problem:
    Archive logs are not being shipped to the standby, even when I do a logfile switch on the primary.
    This is very urgent...
    Any help would be greatly appreciated.
    Thanks,
    Aswin.

    Hi navneet,
    I am posting the log_archive_dest-1 & 2 here... Please go thru this...
    LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/flash_recovery_area/DLNX/archivelog/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=DLNX'
    LOG_ARCHIVE_DEST_2='SERVICE=DLNX2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=DLNX2'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    DLNX is the db_unique_name for primary, and tnsservice name in the secondary for the primary db.
    DLNX2 is the db_unique_name for secondary, and tnsservice name in the primary for the secondary db
    Each of them can be accessed from the other thru sqlplus sys/pwd@<db> as sysdba.
    Any corrections in this ???
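    Two hedged checks that usually show why logs are not shipping: v$archive_dest reports the last error for the remote destination, and v$archived_log on the standby shows what has arrived and been applied:
    select dest_id, status, error from v$archive_dest where dest_id = 2;  -- on primary
    select sequence#, applied from v$archived_log order by sequence#;     -- on standby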

  • (V9I) ORACLE9I NEW FEATURE : LOGMINER features and usage

    Product : ORACLE SERVER
    Date written : 2002-11-13
    (V9I) ORACLE9I NEW FEATURE : LOGMINER FEATURES AND USAGE
    ==========================================
    Purpose
    This note describes the new features of the Oracle 9i LogMiner and how to use them.
    (For the general concepts and features of the 8i LogMiner, see BUL. 12033.)
    Explanation
    LogMiner, first provided in 8i, is used to analyze the redo log files or
    archive log files of Oracle 8 and later.
    9i includes everything provided in 8i. What is new: in 8i the dictionary
    information needed for LogMiner analysis could only be generated as a flat
    file (output), while from 9i on it can be stored either in a flat file or in
    the online redo logs, and a feature was added to skip the affected portion
    when block corruption occurs.
    The 9i new features are summarized below.
    1. 9I New features
    1) DDL support; note that only 9i or later redo/archive log files can be analyzed
         : DDL appears in the OPERATION column of V$LOGMNR_CONTENTS
    2) The dictionary information created for LogMiner analysis can be stored in the online redo logs
         : the database must be running in archivelog mode
         : create the dictionary with DBMS_LOGMNR_D.BUILD
         : it can be written to a flat file (as in 8i) or to the redo logs
         : e.g. Flat file
              - SQL> EXECUTE dbms_logmnr_d.build
                   (DICTIONARY_FILENAME => 'dictionary.ora'
                   ,DICTIONARY_LOCATION => '/oracle/database'
                   ,OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
         e.g. Redo log
              - SQL> EXECUTE dbms_logmnr_d.build
                   (OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
    3) When redo log block corruption occurs, the corrupted portion can be skipped and analysis continues
         : in 8i, LogMiner terminated on log corruption and the analysis had to be restarted
         : in 9i the corruption can be skipped with the SKIP_CORRUPTION option of DBMS_LOGMNR.START_LOGMNR
    4) Only committed transactions can be displayed
         : the COMMITTED_DATA_ONLY option of DBMS_LOGMNR.START_LOGMNR
    5) DML on index clusters is supported (not available in 8i)
    6) Chained and migrated rows can be analyzed
    2. Restrictions (not supported by the 9i LogMiner)
    1) LONG and LOB data types
    2) Object types
    3) Nested tables
    4) Object Refs
    5) IOT (Index-Organized Table)
    3. LogMiner Views
    1) V$LOGMNR_CONTENTS - contents of the redo log files currently being analyzed
    2) V$LOGMNR_DICTIONARY - the dictionary file in use
    3) V$LOGMNR_LOGS - the redo log files being analyzed
    4) V$LOGMNR_PARAMETERS - the current parameter values set for LogMiner
    4. Setup for using LogMiner
    1) Create the dictionary for LogMiner (flat file or online redo log)
    2) Register the archive log files or redo log files
    3) Start the redo log analysis
    4) Query the redo log contents
    5) End LogMiner
    5. LogMiner Example
    1) Check the location where the flat file will be created
    SQL> show parameter utl
    NAME TYPE VALUE
    utl_file_dir string /home/ora920/product/9.2.0/smlee
    2) Define the flat file that will hold the dictionary information -> here named dictionary.ora
    SQL> execute dbms_logmnr_d.build -
    (dictionary_filename => 'dictionary.ora', -
    dictionary_location => '/home/ora920/product/9.2.0/smlee', -
    options => dbms_logmnr_d.store_in_flat_file);
    PL/SQL procedure successfully completed.
    3) Switch the logfile and note the current logfile name and the current time.
    SQL> alter system switch logfile;
    System altered.
    SQL> select member from v$logfile, v$log
    2 where v$logfile.group# = v$log.group#
    3 and v$log.status='CURRENT';
    MEMBER
    /home/ora920/oradata/ORA920/redo02.log
    SQL> select current_timestamp from dual;
    CURRENT_TIMESTAMP
    13-NOV-02 10.37.14.887671 AM +09:00
    4) For the test, create table emp30, update it, and then drop it.
    SQL> create table emp30 as
    2 select employee_id, last_name, salary from hr.employees
    3 where department_id=30;
    Table created.
    SQL> alter table emp30 add (new_salary number(8,2));
    Table altered.
    SQL> update emp30 set new_salary = salary * 1.5;
    6 rows updated.
    SQL> rollback;
    Rollback complete.
    SQL> update emp30 set new_salary = salary * 1.2;
    6 rows updated.
    SQL> commit;
    Commit complete.
    SQL> drop table emp30;
    Table dropped.
    SQL> select current_timestamp from dual;
    CURRENT_TIMESTAMP
    13-NOV-02 10.39.20.390685 AM +09:00
    5) Start LogMiner (work in another session)
    SQL> connect /as sysdba
    Connected.
    SQL> execute dbms_logmnr.add_logfile ( -
    logfilename => -
    '/home/ora920/oradata/ORA920/redo02.log', -
    options => dbms_logmnr.new)
    PL/SQL procedure successfully completed.
    SQL> execute dbms_logmnr.start_logmnr( -
    dictfilename => '/home/ora920/product/9.2.0/smlee/dictionary.ora', -
    starttime => to_date('13-NOV-02 10:37:44','DD-MON-RR HH24:MI:SS'), -
    endtime => to_date('13-NOV-02 10:39:20','DD-MON-RR HH24:MI:SS'), -
    options => dbms_logmnr.ddl_dict_tracking + dbms_logmnr.committed_data_only)
    PL/SQL procedure successfully completed.
    6) Query the v$logmnr_contents view
    SQL> select timestamp, username, operation, sql_redo
    2 from v$logmnr_contents
    3 where username='HR'
    4 and (seg_name = 'EMP30' or seg_name is null);
    TIMESTAMP            USERNAME   OPERATION   SQL_REDO
    13-NOV-02 10:38:20   HR         START       set transaction read write;
    13-NOV-02 10:38:20   HR         DDL         CREATE TABLE emp30 AS
                                                SELECT EMPLOYEE_ID, LAST_NAME,
                                                SALARY FROM HR.EMPLOYEES
                                                WHERE DEPARTMENT_ID=30;
    13-NOV-02 10:38:20   HR         COMMIT      commit;
    13-NOV-02 10:38:50   HR         DDL         ALTER TABLE emp30 ADD
                                                (new_salary NUMBER(8,2));
    13-NOV-02 10:39:02   HR         UPDATE      UPDATE "HR"."EMP30" set
                                                "NEW_SALARY" = '16500' WHERE
                                                "NEW_SALARY" IS NULL AND ROWID
                                                ='AAABnFAAEAALkUAAA';
    13-NOV-02 10:39:02   HR         DDL         DROP TABLE emp30;
    7) End LogMiner.
    SQL> execute dbms_logmnr.end_logmnr
    PL/SQL procedure successfully completed.
    Reference Documents
    Note. 148616.1
