Backup into file system

Hi
Setting backup-storage with the following configuration is not generating backup files under the specified location. We are pumping a huge volume of data (a few GB), and it is not getting backed up to the file system. Can you let me know what I am missing here?
Thanks
sunder
<distributed-scheme>
    <scheme-name>distributed-Customer</scheme-name>
    <service-name>DistributedCache</service-name>
    <!-- <thread-count>5</thread-count> -->
    <backup-count>1</backup-count>
    <backup-storage>
        <type>file-mapped</type>
        <directory>/data/xx/backupstorage</directory>
        <initial-size>1KB</initial-size>
        <maximum-size>1KB</maximum-size>
    </backup-storage>
    <backing-map-scheme>
        <read-write-backing-map-scheme>
            <scheme-name>DBCacheLoaderScheme</scheme-name>
            <internal-cache-scheme>
                <local-scheme>
                    <scheme-ref>blaze-binary-backing-map</scheme-ref>
                </local-scheme>
            </internal-cache-scheme>
            <cachestore-scheme>
                <class-scheme>
                    <class-name>com.xxloader.DataBeanInitialLoadImpl</class-name>
                    <init-params>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>{cache-name}</param-value>
                        </init-param>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>com.xx.CustomerProduct</param-value>
                        </init-param>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>CUSTOMER</param-value>
                        </init-param>
                    </init-params>
                </class-scheme>
            </cachestore-scheme>
            <read-only>true</read-only>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
</distributed-scheme>

<local-scheme>
    <scheme-name>blaze-binary-backing-map</scheme-name>
    <high-units>{back-size-limit 1}</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <expiry-delay>{back-expiry 0}</expiry-delay>
    <cachestore-scheme></cachestore-scheme>
</local-scheme>
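
For comparison, a file-mapped backup-storage block sized for a multi-GB data set might look like the sketch below; the size values and file name here are illustrative assumptions only (the configuration above caps the mapped backup file at 1KB):

<backup-storage>
    <type>file-mapped</type>
    <!-- illustrative sizes only; the mapped file must be large enough for the backup copies it will hold -->
    <initial-size>100MB</initial-size>
    <maximum-size>2GB</maximum-size>
    <directory>/data/xx/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
</backup-storage>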

Hi
We tried the following configuration:
<near-scheme>
    <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
        <local-scheme>
            <eviction-policy>HYBRID</eviction-policy>
            <high-units>{front-size-limit 0}</high-units>
            <unit-calculator>FIXED</unit-calculator>
            <expiry-delay>{back-expiry 1h}</expiry-delay>
            <flush-delay>1m</flush-delay>
        </local-scheme>
    </front-scheme>
    <back-scheme>
        <distributed-scheme>
            <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
        </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
</near-scheme>

<distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
        <partitioned>true</partitioned>
        <read-write-backing-map-scheme>
            <internal-cache-scheme>
                <external-scheme>
                    <high-units>20</high-units>
                    <unit-calculator>BINARY</unit-calculator>
                    <unit-factor>1073741824</unit-factor>
                    <nio-memory-manager>
                        <initial-size>1MB</initial-size>
                        <maximum-size>50MB</maximum-size>
                    </nio-memory-manager>
                </external-scheme>
            </internal-cache-scheme>
            <cachestore-scheme>
                <class-scheme>
                    <class-name>com.xx.loader.DataBeanInitialLoadImpl</class-name>
                    <init-params>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>{cache-name}</param-value>
                        </init-param>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>com.xx.bean.HeaderData</param-value>
                        </init-param>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>SDR.TABLE_NAME_XYZ</param-value>
                        </init-param>
                    </init-params>
                </class-scheme>
            </cachestore-scheme>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
        <type>off-heap</type>
        <initial-size>1MB</initial-size>
        <maximum-size>50MB</maximum-size>
    </backup-storage>
    <autostart>true</autostart>
</distributed-scheme>
With this configuration, the process's resident main memory consumption is about 15 GB.
When we changed the configuration to:
<near-scheme>
    <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
        <local-scheme>
            <eviction-policy>HYBRID</eviction-policy>
            <high-units>{front-size-limit 0}</high-units>
            <unit-calculator>FIXED</unit-calculator>
            <expiry-delay>{back-expiry 1h}</expiry-delay>
            <flush-delay>1m</flush-delay>
        </local-scheme>
    </front-scheme>
    <back-scheme>
        <distributed-scheme>
            <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
        </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
</near-scheme>

<distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
        <partitioned>true</partitioned>
        <read-write-backing-map-scheme>
            <internal-cache-scheme>
                <external-scheme>
                    <high-units>20</high-units>
                    <unit-calculator>BINARY</unit-calculator>
                    <unit-factor>1073741824</unit-factor>
                    <nio-memory-manager>
                        <initial-size>1MB</initial-size>
                        <maximum-size>50MB</maximum-size>
                    </nio-memory-manager>
                </external-scheme>
            </internal-cache-scheme>
            <cachestore-scheme>
                <class-scheme>
                    <class-name>com.xx.loader.DataBeanInitialLoadImpl</class-name>
                    <init-params>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>{cache-name}</param-value>
                        </init-param>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>com.xx.bean.HeaderData</param-value>
                        </init-param>
                        <init-param>
                            <param-type>java.lang.String</param-type>
                            <param-value>SDR.TABLE_NAME_XYZ</param-value>
                        </init-param>
                    </init-params>
                </class-scheme>
            </cachestore-scheme>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
        <type>file-mapped</type>
        <initial-size>1MB</initial-size>
        <maximum-size>100MB</maximum-size>
        <directory>/data/xxcache/blazeload/backupstorage</directory>
        <file-name>{cache-name}.store</file-name>
    </backup-storage>
    <autostart>true</autostart>
</distributed-scheme>
Note that the backup storage is now file-mapped:
<backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
</backup-storage>
We still see that the process's resident main memory consumption is 15 GB, and the /data/xxcache/blazeload/backupstorage folder is empty.
We wanted to check where backup storage maintains its data; we would like to offload this to a flat file.
Any pointers in this regard would be appreciated.
Thanks
sunder
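
For illustration, if the goal is to keep the cache data itself (not only the backup copies) in flat files, one possible direction is an external-scheme backed by an nio-file-manager rather than an nio-memory-manager. The directory and size values in this sketch are placeholder assumptions, not a verified configuration:

<internal-cache-scheme>
    <external-scheme>
        <high-units>20</high-units>
        <unit-calculator>BINARY</unit-calculator>
        <unit-factor>1073741824</unit-factor>
        <!-- nio-file-manager keeps the backing data in memory-mapped files on disk -->
        <nio-file-manager>
            <initial-size>1MB</initial-size>
            <maximum-size>50MB</maximum-size>
            <directory>/data/xxcache/blazeload/primarystorage</directory>
        </nio-file-manager>
    </external-scheme>
</internal-cache-scheme>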

Similar Messages

  • How to copy NCLOB value(Contains Word Document) into file system

    How can I copy an NCLOB value (containing a Word document) to the file system, or display it in SQL*Plus?

    The UTL_FILE package will write only to a text file, not an NCLOB value (which contains images as well as text).

  • How to upload a file into file system on server?

    We want to allow our partners to upload .csv files to our Unix box. How do we upload a file into the file system? I might be wrong, but after reading the documentation, I think the OA Framework only allows saving the file in the database. Please advise.
    Thanks,

    Hi,
    I was making a very silly mistake and found it. We need to create a new entity object before trying to fill it with data. So, in the controller, call a method in the AM that creates a new entity object (do this in processRequest, so that the VO gets initialized when the page loads).
    Now you can fill the entity with data on the page (i.e. browse and choose the file), and when the page is submitted, call a method in the AM again, this time from processFormRequest, to do the commit.
    Code example (in the AM class):

    // Creates a new CV record. To be called from processRequest when the page loads.
    public void createCVRecord()
    {
        XxPersonCVVOImpl vo = getXxPersonCVVO1();
        if (!vo.isPreparedForExecution())
        {
            vo.executeQuery();
        }
        Row row = vo.createRow();
        vo.insertRow(row);
        // Required per OA Framework Model Coding Standard M69
        row.setNewRowState(Row.STATUS_INITIALIZED);
    } // end createCVRecord()

    // The commit method. To be called from processFormRequest when the page is submitted.
    public void apply()
    {
        getTransaction().commit();
    }
    Otherwise there is a different way; please refer to the following thread for more details:
    File Upload
    Thanks to Martin, who pointed out the mistake.

  • Backup to file system and sbt_tape at the same time?

    Hello!
    Is it possible to do an RMAN backup to disk and SBT_TAPE at the same time (just one backup, not two)? The reason is that I want to copy the RMAN backup files from the local file system via robocopy to another server (in a different building), so that I can use them for fast restores in case of a crash.
    If not, what is the recommended strategy in this case? Backups should be available on tape AND on the file system of a different machine.
    Environment: Oracle 10g, Windows Server 2008, CommVault for backup to tape.
    Thanks for your advice.
    Best regards,
    Christian

    If you manually copy backupsets out of the FRA to another location, but still accessible as "local disks", you can use the CATALOG command in RMAN to "catalog" the copies.
    Thus, if you copy or move files from the FRA to /mybackups/MYDB, you would use
    "CATALOG START WITH /mybackups/MYDB" or, individually, "CATALOG BACKUPPIECE /mybackups/MYDB/backuppiece_1", etc.
    Once you have moved or removed the backupsets out of the FRA, you must use
    "CROSSCHECK BACKUP"
    and
    "DELETE EXPIRED BACKUP"
    to update the RMAN repository. Otherwise, Oracle will continue to "account" for the disk space consumed by the backup pieces in V$FLASH_RECOVERY_AREA_USAGE and will soon "run out of space".
    You would do the same for ArchiveLogs (CROSSCHECK ARCHIVELOG ALL; DELETE EXPIRED ARCHIVELOG ALL) if your ArchiveLogs go to the FRA as USE_DB_RECOVERY_FILE_DEST

  • How to save a file into file system from database?

    Hi!
    I am using APEX 2.2.1. I have created a report showing the filename and id from a table. The table has a BLOB column containing the file. I want to hyperlink the filename so that I can go ahead and save it to my file system. But I don't know what to put in the URL field of the filename column attributes. How do I get the file id of a file already in the database so I can link to it and use it? Any ideas, please.
    Regards,
    Deepa.

    This is a reference for the file download:
    http://download-uk.oracle.com/docs/cd/B31036_01/doc/appdev.22/b28839/up_dn_files.htm#CJAHDJDA
    I have a demo page showing this as well:
    http://htmldb.oracle.com/pls/otn/f?p=31517:15
    Denes Kubicek

  • After Restoring/Backup of File System XI Java Instances are not up!

    Hello all,
    We are facing a problem restoring the SAP XI system: after taking a backup of the system, the Java instances in the SAP XI system are not starting again. ABAP connections are fine.
    Can anyone provide suggestions/solutions to restore the XI system?
    The system information is as follows.
    System Component:     SAP NetWeaver 2004s, PI 7.0
    Operating System:     SunOS 5.9, SunOS 5.10
    Database:          ORACLE 9.2.0.
    Regards,
    Ketan Patel

    If it's REALLY a PI 7.0 (SAP_BASIS 700 and WebAS Java 7.00) then it's not compatible. WebAS 7.00 needs Oracle 10g (http://service.sap.com/pam).
    Also see
    http://service.sap.com/nw2004s
    --> Availibility
    --> SAP NetWeaver 7.0 (2004s) PAM
    If you open the PowerPoint, you will see that Oracle 9 is not listed; I wonder how you got that installed.
    Nevertheless, if you recover a Java instance, both the filesystem and the database content (of the Java schema) must be in sync, meaning you need to restore both the database (schema) and the filesystem from backups taken at the same time.
    Check Java Backup and Restore :
    Restoring the system:
    1. Shut down the system.
    2. Install a new AS Java system using SAPInst, or restore the file system from the offline backups that you created.
    3. Import the database backup using the relevant tools provided by the database vendor.
    4. Overwrite the SAP system directory /usr/sap/.
    5. Start the system (see Starting and Stopping SAP NetWeaver ABAP and Java).
    The J2EE Engine is restored with the last backup.
    Markus

  • ASM RMAN backup to File System

    Hi all,
    I have an RMAN backup (datafile and controlfile) that was taken on an ASM instance (not RAC), Oracle 11.2.0.2, on a Linux server. Now I want to restore the backup into a new database on Windows/Linux using regular file system storage (a single-instance RDBMS) instead of ASM.
    Is this possible?
    Can I restore an ASM RMAN backup onto file system storage on a new server?
    Kindly clarify my question.
    Thanks in Advance..
    Nonuday

    Nonuday wrote:
    Hi Levi,
    Thanks for your invaluable script and blog.
    Can you clarify this query for me?
    I have an RMAN backup taken from ASM; it is a database and control file backup, which contains datafiles and controlfiles.
    Now I need to restore this on my system, and here I don't use ASM or archive logs; I use a single-instance database in noarchivelog mode.
    I have restored the control file from the RMAN control file backup.
    Before restoring the control file, I checked the original pfile of the backup database, which had parameters like
    'db_create_file_dest',
    'db_create_online_log_dest',
    'db_recovery_file_dest_size',
    'db_recovery_dest',
    'log_archive_dest'.
    Since I am not going to create a DB in noarchivelog mode, I didn't use any of the above parameters and created a database.
    Now my question is:
    If I restore the database, the datafiles will get restored, and after renaming all the logfiles the database will be opened.
    I want to know whether this method is correct or wrong, and whether the database will work as it was working previously. Or do I need to create db_file_recovery and the other parameters for this database as well?

    About the parameters:
    All these parameters should reflect your current environment; any reference to the old environment must be modified.
    About the filesystem used:
    It does not matter what filesystem you are using; the files (datafile/redolog/controlfile/archivelog/backup piece) are created in a binary format which depends on the platform only. So the same binary file (e.g. a datafile) has the same format and content on a raw device, ASM, ext3, ext2, and so on. To the database it is only a location where the files are stored; the files themselves are the same. ASM has a different architecture from a regular filesystem and needs to be managed in a different manner (i.e. using RMAN).
    About the database:
    Since your database files are the same even on a different filesystem, what you need to do is rename your datafiles/redo files in the controlfile during the restore; the redo files will be recreated.
    So it does not matter whether your database is in noarchivelog or archivelog mode; the way you would do a restore on ASM is the same way to restore on a regular filesystem (it is only about renaming the database files in the controlfile during the restore).
    On the blog, the post "How Migrate All Files on ASM to Non-ASM (Unix/Linux)" is about moving the files from one filesystem to another, but you can modify the script for restore purposes:
    ## SET NEWNAME tells RMAN where the file will be restored and keeps the file locations in a memory buffer
    RMAN> set newname for datafile 1 to <location>;
    ## SWITCH takes the list of files from the memory buffer (RMAN) and renames the already-restored files in the controlfile
    RMAN> switch datafile/tempfile all;
    With the database mounted, use the script below (I have commented out three lines that are unnecessary in your case):
    SET serveroutput ON;
    DECLARE
      vcount  NUMBER:=0;
      vfname VARCHAR2(1024);
      CURSOR df
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/datafile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$datafile;
      CURSOR tp
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/tempfile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$tempfile;
    BEGIN
    --  dbms_output.put_line('CONFIGURE CONTROLFILE AUTOBACKUP ON;'); ### commented
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
      --  dbms_output.put_line('backup as copy datafile ' || dfrec.file# ||' format  "'||dfrec.name ||vcount||'.dbf";');  ### commented
      END LOOP;
      dbms_output.put_line('run');
      dbms_output.put_line('{');
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
        dbms_output.put_line('set newname for datafile ' || dfrec.file# ||'  to  '''||dfrec.name ||vcount||'.dbf'' ;');
      END LOOP;
      FOR tprec IN tp
      LOOP
        IF tprec.name  !=  vfname THEN
          vcount      :=1;
          vfname     := tprec.name;
        ELSE
          vcount := vcount+1;
          vfname:= tprec.name;
        END IF;
        dbms_output.put_line('set newname for tempfile ' || tprec.file# ||'  to  '''||tprec.name ||vcount||'.dbf'' ;');
        END LOOP;
          dbms_output.put_line('restore database;');
        dbms_output.put_line('switch tempfile all;');
        dbms_output.put_line('switch datafile all;');
        dbms_output.put_line('recover database;');
        dbms_output.put_line('}');
    ---   dbms_output.put_line('alter database open;');  ### commented because you need to rename your redologs in the controlfile before opening the database
        dbms_output.put_line('exit');
    END;
    /
    After the restore you must rename your redologs in the controlfile from the old location to the new location, e.g.:
    ## use this query to get the current location of the redologs
    SQL> select group#, member from v$logfile order by 1;
    ## and change from <old_location> to <new_location>
    SQL> ALTER DATABASE
           RENAME FILE '+DG_TSM_DATA/tsm/onlinelog/group_3.263.720532229'
                    TO '/u01/app/oracle/oradata/logs/log3a.rdo';
    When you have changed all the redologs in the controlfile, issue the command below:
    SQL> alter database open resetlogs;
    PS: Always track the database in real time using the database alert log.
    HTH,
    Levi Pereira

  • Backup root file system on DVD

    I have a root file system about 4 GB in size.
    Can I back it up onto a DVD?
    I read that ufsdump works with tapes.
    Can it work with a DVD?
    If yes, what is the command format?


  • Aperture backup strategy - file system compatibility

    Hi all,
    I am beginning to run out of space for a managed library so I'm starting to look for alternatives. Currently I have a managed library, with a vault on another disk on the same computer. This vault is then mirrored to another computer (Linux) via rsync. This seems to work fine.
    The solution I'm thinking about is to skip the vault and mirror the library + referenced files directly, while Aperture is not running, of course.
    The question that arises is whether Aperture does/needs anything "nonstandard" in its files - resource forks, for example. The file names inside the library are a bit weird but within spec for a Unix filesystem, so that isn't a problem, but I would hate to have to recover a library and then discover that all the files are "disconnected" because of some minor change that occurred during backup/restore...
    Anyone with experience of this (mirroring libraries to a non-Apple filesystem and restoring it)?
    PowerMac G5 Dual 2.0 GHz, 2GB RAM Mac OS X (10.4.8) ATI Radeon 9650

    OK,
    I actually tried this myself instead. I created a new Library, imported a project into it to get some photos with metadata and versions, etc. The masters were relocated to outside the library. I then copied the library to and from a Linux server with rsync, and moved the masters to the file server as well.
    After opening Aperture again and reconnecting the masters, all seemed well.

  • Raw Device Backup to file system(OPS 8i)

    Hi
    Our current setup is:
    Oracle Database 8.1.6 (Oracle Parallel Server), two nodes
    Noarchivelog mode
    Solaris 2.6
    All database files, redo logfiles, and controlfiles are on raw devices.
    Database size: 16 GB
    Oracle block size: 8192
    Currently we are using only Oracle export backups.
    But now I want to take a cold backup of the Oracle database to disk.
    Cold backup: raw --> disk
    How can we take a cold backup with the dd command and the skip parameter?
    Does anybody have practical experience with the dd command and the skip parameter?
    Thanks and regards
    Kuljeet pal singh

    you can use ufsdump instead of dd

  • Do we need to backup OS (file systems) on Exadata storage cells?

    We have received conflicting messages about whether we need to or not. I'd like to hear your opinions.
    Thanks!

    Hi,
    The answer is no.
    There is no need to backup the OS of the storage cell.
    Worst case a complete storage cell needs to be replaced. A field engineer will open your broken storage cell and take out the onboard USB and put it inside the new storage cell.
    The system will boot from this USB and you can choose to implement the 'old' configuration on the new cell.
    Regards,
    Tycho

  • Using old file system backup for Cloning

    I took an offline backup of Oracle 11i (11.5.10.2) 15 days ago. Before taking the file system backup, I verified that all the latest Rapid Clone patches were applied. No changes or patch work in APPL_TOP or the DB have been done since that backup. Now I need to clone this instance; how can I use this backup for cloning?
    Rapid Clone scripts create and generate some files/directories, so I am not sure whether my old file system backup will work or not. What is the best way to use an old backup for cloning, and what files and directories, in addition to the old file system backup, do I need to copy to the target system?
    Thanks for reviewing and suggestions.
    Samar

    Samar,
    If you ran preclone before backing it up, your backup should be valid for cloning.
    Section 2.1 in the cloning doc has to be in the backup.
    These docs should clear up your doubts about cloning:
    Cloning Oracle Applications Release 11i with Rapid Clone
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230672.1
    FAQ: Cloning Oracle Applications Release 11i
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216664.1

  • RMAN to Disk (shared mounted file system) then secure backup to Tape

    Hi
    This is a little away from the norm, and I am new to Oracle Secure Backup.
    We have several databases on physically separate servers, all backing up to a central disk (a ZFS shared file system).
    We also have a media server with the same mounted file system, so it can see all of the RMAN backups there.
    Secure Backup is installed on the media server and is configured as such.
    The question I have is that I need to back up to tape the file system where all the RMAN backups live. I have configured the dataset, but I get file permission errors for each of the RMAN backup files in the directory.
    I have tried to change these permissions but to no avail (assuming it is just a read/write access change that is needed, though this may be a problem in the long run). What is the general process for backing up already-created RMAN backups sitting in a shared area? I know it is not the norm to do disk-then-tape backups, but can this be done? I would have installed the Secure Backup client on each server and managed the whole backup through Secure Backup, but this is not possible; I must do the to-tape backup from the file system. Any advice and guidance on this would be much appreciated.
    Kind regards
    Vicky

    You can easily accomplish a very streamlined D2D2T strategy: RMAN backup to disk, then back up that disk to tape via RMAN and OSB. Upon restore, RMAN will restore from the best media (disk or tape) based on where the files are located.
    Donna

  • Back up - file system

    This question is about backups.
    Every time, I take a file system backup of the app folder (\\Hyperion\AnalyticServices) from my Essbase server.
    Is this the right way of backing up the file system? If any of the databases becomes corrupted, can I just replace the app folder with the backup app folder? Does this work?

    This is one way to do the backup. If an application became corrupted, you would just have to restore that application's directory under the app folder. That is, unless you have stored files on another drive: if you have gone into the storage tab of the properties and changed where the page and index files reside, you would also have to back up that drive (same directory starting point) at the same time as your other drive backup.

  • Using ufsrestore on multiple file systems

    I have a ufsdump script that allows me to back up the file systems on c0t0d0 all at once. Now I am looking for a way to restore all of these file systems at one time using a script. I would greatly appreciate any help.
    Cletus

    Is this the boot disk ?
