File system transactions
Is there a JDBC driver or a library of some sort that allows me to enlist file system operations in the same transaction as database operations? I have a WebLogic 8.1 EJB application that generates files to be FTPed by clients, and I want the file generation to be part of the same database transaction: if something fails in the database, I don't want to generate the file, and vice versa, if the file generation fails for whatever reason (such as running out of space), I want to roll back the database transaction. I know there is an article about doing file transactions (http://www.onjava.com/lpt/a/1273), but I was hoping that since 2002, when that article was published, someone has written a library suitable for use in a J2EE environment.
Thank you,
Costa
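As far as I know there is still no standard XA/JTA resource adapter for plain file systems, so the usual workaround remains the compensation pattern from that article: stage the file, commit the database, then publish the file atomically, deleting the staged copy if anything fails. A minimal sketch in Python (the function name and the db_commit callback standing in for your EJB/JDBC commit are illustrative assumptions, not a real library):

```python
import os
import tempfile

def write_with_db_txn(path, data, db_commit):
    """Compensation pattern: stage the file, commit the DB, then publish.

    - The file write happens first, so an out-of-space error surfaces
      before the database transaction is committed.
    - The staged file only becomes visible (via atomic rename) after the
      database commit succeeds; on any failure it is deleted.
    """
    fd, staged = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)          # fails early if the disk is full
        db_commit()                # stands in for the JDBC/JTA commit
        os.replace(staged, path)   # atomic on POSIX; publishes the file
    except Exception:
        if os.path.exists(staged):
            os.remove(staged)      # compensate: discard the staged file
        raise
```

Note the remaining window: a crash between the database commit and the rename still leaves the two out of sync, so a reconciliation job at startup is still advisable.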
It's not the only driver that can work with any DB. Any type 3 driver will do just the same, as it basically needs another driver on the server end to actually connect to the DB (a type 3 driver is actually nothing more than a proxy, possibly adding secure communication and other features).
However, it will not be able to join tables from different databases, even databases of the same type (e.g. two SQL Servers). If the server doesn't do it itself, the JDBC driver won't.
Alin.
Similar Messages
-
Uploaded Files stored in Oracle 10G database or in Unix File system
Hey All,
I am trying to understand best practices for storing uploaded files. Should you store them within the database itself (our current method, using BLOB storage), or use a BFILE locator to store them on the file system (our DBs are on UNIX)? Or is there another method I should be entertaining? I have read arguments on both sides of this question and wanted to see what answers forum readers could provide. I understand there are quite a few factors, but my situation is as follows:
1) Storing text and pdf documents.
2) File sizes range from a few Kb to up to 15MB in size
3) uploaded files can be deleted and updated / replaced quite frequently
Right now we have an Oracle stored procedure that is uploading the files binary data into a BLOB column on our table. We have no real "performance" problems with this method but are entertaining the idea of using the UNIX file system for storage instead of the database.
Thanks for the insight!!
Anthony Roeder
Anthony,
First word you must learn here in this forum is RESPECT.
If you require any further explanation, just say so.
BLOB compared with BFILE
Security:
BFILEs are inherently insecure, as insecure as your operating system (OS).
Features:
BFILEs are not writable from typical database APIs whereas BLOBs are.
One of the most important features is that BLOBs can participate in transactions and are recoverable. Not so for BFILEs.
Performance:
Roughly the same.
Upping the size of your buffer cache can make a BIG improvement in BLOB performance.
BLOBs can be configured to exist in Oracle's cache which should make repeated/multiple reads faster.
Piece-wise/non-sequential access of a BLOB is known to be faster than that of a BFILE.
Manageability:
Only the BFILE locator is stored in an Oracle BACKUP. One needs to do a separate backup to save the OS file that the BFILE locator points to. The BLOB data is backed up along with the rest of the database data.
Storage:
The amount of table space required to store file data in a BLOB will be larger than that of the file itself due to the LOB index, which is the reason for better BLOB performance for piece-wise random access of the BLOB value. -
Store large volume of Image files, what is better ? File System or Oracle
I am working on IM (Image Management) software that needs to store and manage over 8,000,000 images.
I am not sure if I have to use the file system to store the images or the database (BLOB or CLOB).
Until now I have only used the file system.
Could someone who already has experience storing large volumes of images tell me the advantages and disadvantages of using the file system versus the Oracle database?
My initial database will have 8,000,000 images and it will grow by 3,000,000 per year.
Each image will have a size between 200 KB and 8 MB, but the mean is 300 KB.
I am using Oracle 10g. I read in other forums, about PostgreSQL and Firebird, that it isn't good to store images in the database because the database always crashes.
I need to know whether it is the same with Oracle, and why. Can I trust Oracle for a service this large? Are there tips for storing files in the database?
Thanks for the help.
Best Regards,
Eduardo
Brazil.
1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
3) Storing the data in the database would generally strike me as preferable, if only because of the recoverability implications. If you store data on a file system, you are inevitably going to have cases where an application writes a file and the transaction to insert the row into the database fails, or the transaction to delete a row from the database succeeds before the file is deleted, which can make things inconsistent (images with nothing in the database and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
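Justin's sizing in point 1 is easy to verify; a quick check using decimal units (as storage is usually quoted) and the 300 KB mean from the question:

```python
KB = 1000            # decimal units, as disk capacity is usually quoted
TB = 1000 ** 4

initial = 8_000_000 * 300 * KB     # 8M images at the 300 KB mean
per_year = 3_000_000 * 300 * KB    # 3M new images per year

print(initial / TB, per_year / TB)  # 2.4 0.9
```

At the 8 MB upper bound the initial load would be about 64 TB, so for capacity planning it is the mean, not the maximum, that matters.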
Justin -
STORE ATTACHMENTS IN A FILE SYSTEM
Currently when the users use the attachments it gets stored in the database level in fnd_lobs
table and it seems the volume in our production db is shooting up daily. We wanted to know is
there any way of storing the attachments outside the database for large objects? is it possible to
store them on the OS level or on a NAS mount? We are planning to leverage the std attachment
feature for our iExpense module which may shoot up the DB space to 100-200 GB per year. So please
let us know the possible options to reduce the database space consumed by attachments.
BFILE Datatype
The BFILE datatype enables access to binary file LOBs that are stored in file systems outside Oracle Database. A BFILE column or attribute stores a BFILE locator, which serves as a pointer to a binary file on the server file system. The locator maintains the directory name and the filename.
You can change the filename and path of a BFILE without affecting the base table by using the BFILENAME function. Please refer to BFILENAME for more information on this built-in SQL function.
Binary file LOBs do not participate in transactions and are not recoverable. Rather, the underlying operating system provides file integrity and durability. BFILE data can be up to 2^32-1 bytes, although your operating system may impose restrictions on this maximum.
The database administrator must ensure that the external file exists and that Oracle processes have operating system read permissions on the file.
The BFILE datatype enables read-only support of large binary files. You cannot modify or replicate such a file. Oracle provides APIs to access file data. The primary interfaces that you use to access file data are the DBMS_LOB package and the Oracle Call Interface (OCI). -
Btrfs May Be The Default File-System For Fedora 16
Having a somewhat odd interest in different types of filesystems, I thought I'd share this snippet I wandered across.
Brought up on the Fedora development list are the plans for Btrfs in Fedora, which provides a target of Fedora 16 when EXT4 will be replaced by Btrfs as the default Linux file-system on new installations.
Fedora was one of the first distributions to deploy support for installing the Linux distribution to a Btrfs root file-system, while the Moblin/MeeGo camp has already turned to it as the default file-system, and now Fedora may finally have everything ready and are comfortable with the state of this Oracle-sponsored file-system.
With the intended Fedora deployment of Btrfs, they will also switch from using LVM as the disk's volume manager to instead relying upon the volume management abilities built into Btrfs itself. Fedora / Red Hat developers have previously already worked on taking advantage of other Btrfs features, like snapshots, to provide system roll-back support by creating a copy-on-write snapshot prior to each yum/RPM package transaction.
In order to meet the Fedora 16 Btrfs target, Btrfs support needs to be merged into Fedora's GRUB package (or a separate non-Btrfs /boot file-system must be used), Red Hat's Anaconda must be hooked in to properly utilize Btrfs and its volume management features, and the Fedora LiveCD must be re-worked a bit to handle Btrfs. Additionally, the fsck support for Btrfs must be completed. Oracle's Chris Mason is nearly (circa 90%) complete with this task.
Before Fedora 16 arrives we still have Fedora 15 to worry about, which will continue to offer Btrfs as a file-system option while EXT4 remains the default.
Canonical is also evaluating when to switch to the Btrfs file-system and we may see that move similarly happen for Ubuntu 11.10.
This would seem to imply that Fedora is giving Btrfs the big thumbs up! They currently release twice a year, and Fedora 15 is (I believe) due out this May, so it wouldn't be too wrong to surmise that come Christmas Btrfs would be the standard install option. However, I have always thought of Fedora as being a bit of a testing bed for Red Hat Enterprise, so reading between the lines, I think they want to release Btrfs to the wild and get the bugs squashed so it can be incorporated into Red Hat, which is a good thing all round, for all concerned.
Quote blatantly ripped from Phoronix.
wikipedia btrfs
kernel.org btrfs
There are, as I understand it, three big issues keeping Btrfs from hitting its final "stable" release:
1) No true fsck (as of November, a fix is due "any time now")
2) The logic used for subvolumes (a sort of "virtual LVM" setup) in Btrfs is wholly different from that used in Ext*, JFS, NTFS, HFS, etc; because of this, there isn't yet a single script/program that will accurately display space consumption. Instead, the user needs to run a couple different commands and extrapolate. A little annoying, but it doesn't affect stability.
3) A bug in the COW aspect of the defrag command which, if run with an incorrect flag, can cause a doubling in the amount of metadata on the filesystem (and thus the amount of space it consumes). Again, it won't wreck anything, but would make management tedious if one needs to restore a snapshot to get rid of extra cruft.
These are all mentioned in the FAQs on the Btrfs wiki, though the wiki doesn't give priority to any of them. I'm guessing that, if Fedora 16 will be using this later in the year, the Oracle folks have told the Red Hat folks that they're planning on plowing through these issues very soon.
Last edited by ANOKNUSA (2011-03-01 14:55:46) -
SC 3.0 file system failover for Oracle 8i/9i
I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i 2-node OPS Oracle databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
The SA request states:
"The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
My question is, does anyone have experience with this kind of configuration with 8iOPS or 9iRAC? Are there any issues with the auto-moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
Thanks for your help ...
-j
The problem with your setup of NFS cross-mounting a filesystem (which could have been a recommended solution in SC 2.x, for instance, versus SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
Before this goes up in flames, let me speak from real world experience.
Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
If you're implementing SC 3.x, use hasp and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using rman, you'd have to back up the redologs and archive log files on both nodes, if memory serves me correctly... -
Migrating Essbase cube across versions via file system
A large BSO cube has been taking much longer to complete a 'calc all' in Essbase 11.1.2.2 than on Essbase 9.3.1, despite all Essbase.cfg, app, and db settings being the same (https://forums.oracle.com/thread/2599658).
As a last resort, I've tried the following-
1. Calc the cube on the 9.3.1 server.
2. Use EAS Migration Wizard to migrate the cube from the 9.3.1 server to the 11.1.2.2 server.
3. File system transfer of all ess*.ind and ess*.pag from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server does not yet return any data).
4. File system transfer of the dbname.esm file from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server returns an "unable to load database dbname" error and an "Invalid transaction status for block -- Please use the IBH Locate/Fix utilities to find/fix the problem" error).
5. File system transfer of the dbname.tct file from the 9.3.1\app\db folder to the 11.1.2.2\app\db folder (and voila! Essbase returns data from the 11.1.2.2 server and the numbers match the 9.3.1 server).
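Steps 3 through 5 above boil down to copying four groups of files between the db directories, and the order matters for when retrievals start working. A sketch of that copy (the helper name and paths are illustrative assumptions, not Essbase tooling):

```python
import glob
import os
import shutil

def copy_essbase_db_files(src_db_dir, dst_db_dir):
    """Copy the page/index files (step 3), then .esm and .tct (steps 4-5).

    Only the listed patterns are copied; retrievals on the target only
    work once the .esm and .tct files accompany the ess* files.
    """
    copied = []
    for pattern in ("ess*.ind", "ess*.pag", "*.esm", "*.tct"):
        for f in sorted(glob.glob(os.path.join(src_db_dir, pattern))):
            shutil.copy2(f, dst_db_dir)
            copied.append(os.path.basename(f))
    return copied
```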
This almost seems too good to be true. Can anyone think of any dangers of migrating apps this way? Has nothing changed in file formats between Essbase 9.x and 11.x? Won't not transferring the dbname.ind and dbname.db files cause any issues down the road? Thankfully we are soon moving to ASO for this large BSO cube, so this isn't a long-term worry.
Freshly install Essbase 11.1.2.2 on Windows Server 2008 R2 with the recommended hardware specification. After installation, configure 11.1.2.2 with the DB/schema.
Take a full data backup of the Essbase applications using an export script or by exporting directly from the cube.
Use the EAS Migration Wizard to migrate the Essbase applications.
After migrating the applications successfully, reload all the data into the cube.
For the 4th point:
The IBH error is generally caused by a mismatch between the index file and the PAG file while executing a calculation script. Possible solutions are available.
The recommended procedure is:
a) Disable all logins.
alter application sample disable connects;
b) Forcibly log off all users.
alter system logout session on database sample.basic;
c) Run the MaxL statement to get invalid block header information.
alter database sample.basic validate data to local logfile 'invalid_blocks';
d) Repair invalid block headers.
alter database sample.basic repair invalid_block_headers;
Thanks,
Sreekumar Hariharan -
Decision on File system management in Oracle+SAP
Hi All,
In my production system we used to have /oracle/SID/sapdata1 and /oracle/SID/sapdata2. Initially there were many datafiles assigned to the tablespace PSAPSR3, a few with autoextend on and a few with autoextend off. As per my understanding, DB02 shows the information tablespace-wise: it will report AUTOEXTEND ON as soon as at least one of the datafiles has AUTOEXTEND ON. In PSAPSR3 all the datafiles with autoextend ON are from SAPDATA1, which has only 50 GB left. All the files with autoextend OFF are from SAPDATA2, which has 900 GB of space left.
Now the question is :
1. Do I need to request additional space for SAPDATA1, as some of the tablespaces are at the edge of autoextend and that much space is not left in the file system (sapdata1)? If not, how will they extend? DB growth is 100 GB per month.
2. We usually were adding a 10 GB datafile to the tablespace with 30 GB as autoextend.
Can we add another datafile, from sapdata2 this time, with autoextend ON, and the rest will be taken care of automatically?
Please suggest.
Regards,
Vicky
Hi Vicky,
As you have 100 GB/month growth, my suggestions here would be:
1) Add 2 more mount points, sapdata3 and sapdata4, with around 1 TB of space.
This is to distribute data across 4 data partitions for better performance.
2) As sapdata1 has datafiles with autoextend ON, you need to extend that file system to at least 500 GB so that whenever data is written to datafiles under sapdata1, they have space to grow using the autoextend feature. Without sufficient disk space this may lead to a space problem, and a transaction may result in a dump.
3) No need to change anything on sapdata2, as you already have 900 GB of free space.
Hope this helps.
Regards,
Deepak Kori -
BIP and Siebel server - file system and load balancing question
1. I just need to understand: whenever reports are generated through BIP, are the reports stored in some local directory (say, Reports) on the BIP server or in the Siebel file system? If on a file system, how will the archiving policy be implemented?
2. When we talk of load balancing the BIP server, can a common load balancer be used for both BIP and Siebel servers?
http://myforums.oracle.com/jive3/thread.jspa?threadID=335601
Hi Sravanthi,
Please check the below for finding the ITS and WAS parameters from the backend:
For ITS - go to SE37 >> Utilities >> Settings >> click the icon in the top right corner of the popup window >> select Internet Transaction Server >> you will find the Standard Path and HTTP URL.
For WAS - go to SE37 >> run FM RSBB_URL_PREFIX_GET >> execute it >> you will find the Prefix and Path parameters for WAS.
Please refer to this may help step-by-step : How-to create a portal system for using it in Visual Composer
Hope it helps
Regards
Arun -
Windows 8.1 File System Performance Down Compared to Windows 7
I have a good workstation and a fast SSD array as my boot volume.
Ever since installing Windows 8.1 I have found the file system performance to be somewhat slower than that of Windows 7.
There's nothing wrong with my setup - in fact it runs as stably as it did under Windows 7 on the same hardware with a similar configuration.
The NTFS file system simply isn't quite as responsive on Windows 8.1.
For example, under Windows 7 I could open Windows Explorer, navigate to the root folder of C:, select all the files and folders, then choose Properties. The system would count up all the files in all the folders at a rate of about 30,000 files per second the first time, then about 50,000 files per second the next time, when all the file system data was already cached in RAM.
Windows 8.1 will enumerate roughly 10,000 files per second the first time, and around 18,000 files per second the second time - roughly a 3:1 slowdown. The reduced speed once the data is cached in RAM implies that something in the operating system is the bottleneck.
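Working out the slowdown factors from the enumeration rates quoted above, as a quick sanity check:

```python
uncached = 30_000 / 10_000   # Windows 7 vs. 8.1, first enumeration
cached = 50_000 / 18_000     # second enumeration, data cached in RAM

print(round(uncached, 1), round(cached, 1))  # 3.0 2.8
```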
Not every operation is slower. I've benchmarked raw disk I/O, and Windows 8.1 can sustain almost the same data rate, though the top speed is a little lower. For example, Windows 7 vs. 8 comparisons using the ATTO speed benchmark:
Windows 7: [ATTO benchmark screenshot]
Windows 8: [ATTO benchmark screenshot]
-Noel
Detailed how-to in my eBooks:
Configure The Windows 7 "To Work" Options
Configure The Windows 8 "To Work" Options
No worries, and thanks for your response.
The problem can be characterized most quickly by the slowdown in enumerating files in folders. Unfortunately, besides some benchmarks that show only an incremental degradation in file read/write performance, I don't have any good before/after
measurements of other actual file operations.
Since posting the above I have verified:
My system has 8dot3 support disabled (the same as my Windows 7 setup did).
Core Parking is disabled; CPU benchmarks are roughly equivalent to what they were.
File system caching is configured the same.
CHKDSK reports no problems
C:\TEMP>fsutil fsinfo ntfsInfo C:
NTFS Volume Serial Number : 0xdc00eddf00edc11e
NTFS Version : 3.1
LFS Version : 2.0
Number Sectors : 0x00000000df846fff
Total Clusters : 0x000000001bf08dff
Free Clusters : 0x000000000c9c57c5
Total Reserved : 0x0000000000001020
Bytes Per Sector : 512
Bytes Per Physical Sector : 512
Bytes Per Cluster : 4096
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x0000000053f00000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x0000000000000002
Mft Zone Start : 0x0000000008ad8180
Mft Zone End : 0x0000000008ade6a0
Resource Manager Identifier : 2AFD1794-8CEE-11E1-90F4-005056C00008
C:\TEMP>fsutil fsinfo volumeinfo c:
Volume Name : C - NoelC4 SSD
Volume Serial Number : 0xedc11e
Max Component Length : 255
File System Name : NTFS
Is ReadWrite
Supports Case-sensitive filenames
Preserves Case of filenames
Supports Unicode in filenames
Preserves & Enforces ACL's
Supports file-based Compression
Supports Disk Quotas
Supports Sparse files
Supports Reparse Points
Supports Object Identifiers
Supports Encrypted File System
Supports Named Streams
Supports Transactions
Supports Hard Links
Supports Extended Attributes
Supports Open By FileID
Supports USN Journal
I am continuing to investigate:
Whether file system fragmentation could be an issue. I think not, since I measured the slowdown immediately after installing Windows 8.1.
All of the settings in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
Thank you in advance for any and all suggestions.
-Noel
Detailed how-to in my eBooks:
Configure The Windows 7 "To Work" Options
Configure The Windows 8 "To Work" Options -
DB02 showing 80% used whereas the file system is empty
hi all,
In the DB02 transaction of my BW production server I'm getting the following entry on the Database tab:
Name DEFAULT
DB system: ORA
Size: 45.83 GB Total size: 156.25 GB
Free Size: 9.16 GB Total free size: 119.58 GB
Used: 80 % Total used: 23 %
whereas in the file system I have hardly any space filled.
Now what does this 80% used signify?
Where does my data go during data loading? Does it go into this module? If so, then during the next data load will my database be full?
kindly help.
Regards,
Priya
ok i got it..
Following are the details for daily analysis:
Date Tablespace name Size(MB) Chg.size(MB) Free(MB) Chg. #extents/day Increment by Chg. used (%) Autoextend User blocks Chg.tot.size (MB) Total free(MB) Chg. total free (MB) Total used(%) Chg. tot. used(%) #Files Chg.#Files #Segments Chg. # segments #Extents Chg. # extents
6/24/2010 PSAPSR3 24660.00 460.00 13.06 -0.88 100 0 YES 40000.00 0.00 15353.06 -460.88 62 2 4 0 50421 0 74674 102
6/23/2010 PSAPSR3 24200.00 320.00 13.94 -4.62 100 0 YES 40000.00 0.00 15813.94 -324.62 60 0 4 0 50421 0 74572 114
6/22/2010 PSAPSR3 23880.00 400.00 18.56 -2.00 100 0 YES 40000.00 0.00 16138.56 -402.00 60 1 4 0 50421 0 74458 62
6/21/2010 PSAPSR3 23480.00 360.00 20.56 -4.07 100 0 YES 40000.00 0.00 16540.56 -364.07 59 1 4 0 50421 0 74396 138
6/20/2010 PSAPSR3 23120.00 300.00 24.63 11.57 100 0 YES 40000.00 0.00 16904.63 -288.43 58 1 4 0 50421 0 74258 96
6/19/2010 PSAPSR3 22820.00 60.00 13.06 -11.63 100 0 YES 40000.00 0.00 17193.06 -71.63 57 0 4 0 50421 3 74162 112
6/18/2010 PSAPSR3 22760.00 80.00 24.69 8.63 100 0 YES 40000.00 0.00 17264.69 -71.37 57 0 4 0 50418 0 74050 66
6/17/2010 PSAPSR3 22680.00 60.00 16.06 4.68 100 0 YES 40000.00 0.00 17336.06 -55.32 57 0 4 0 50418 0 73984 67
6/16/2010 PSAPSR3 22620.00 60.00 11.38 -9.43 100 0 YES 40000.00 0.00 17391.38 -69.43 57 1 4 0 50418 0 73917 72
6/15/2010 PSAPSR3 22560.00 60.00 20.81 0.68 100 0 YES 40000.00 0.00 17460.81 -59.32 56 0 4 0 50418 0 73845 63
6/14/2010 PSAPSR3 22500.00 60.00 20.13 -2.43 100 0 YES 40000.00 0.00 17520.13 -62.43 56 0 4 0 50418 0 73782 65
6/13/2010 PSAPSR3 22440.00 80.00 22.56 10.56 100 0 YES 40000.00 0.00 17582.56 -69.44 56 0 4 0 50418 2 73717 100
6/12/2010 PSAPSR3 22360.00 280.00 12.00 0.06 100 0 YES 40000.00 0.00 17652.00 -279.94 56 1 4 0 50416 0 73617 92
6/11/2010 PSAPSR3 22080.00 340.00 11.94 -5.81 100 0 YES 40000.00 0.00 17931.94 -345.81 55 1 4 0 50416 3 73525 180
6/10/2010 PSAPSR3 21740.00 380.00 17.75 -10.06 100 0 YES 40000.00 0.00 18277.75 -390.06 54 1 4 0 50413 0 73345 297
6/9/2010 PSAPSR3 21360.00 380.00 27.81 3.12 100 0 YES 40000.00 0.00 18667.81 -376.88 53 1 4 0 50413 0 73048 150
6/8/2010 PSAPSR3 20980.00 380.00 24.69 2.25 100 0 YES 40000.00 0.00 19044.69 -377.75 52 1 4 0 50413 0 72898 148
6/7/2010 PSAPSR3 20600.00 400.00 22.44 16.56 100 0 YES 40000.00 0.00 19422.44 -383.44 51 1 4 0 50413 0 72750 180
6/6/2010 PSAPSR3 20200.00 340.00 5.88 -12.50 100 0 YES 40000.00 0.00 19805.88 -352.50 50 0 4 0 50413 0 72570 194
6/5/2010 PSAPSR3 19860.00 360.00 18.38 1.57 100 0 YES 40000.00 0.00 20158.38 -358.43 50 1 4 0 50413 0 72376 252
6/4/2010 PSAPSR3 19500.00 380.00 16.81 -0.88 100 0 YES 40000.00 0.00 20516.81 -380.88 49 1 4 0 50413 0 72124 322
6/3/2010 PSAPSR3 19120.00 180.00 17.69 6.00 100 0 YES 40000.00 0.00 20897.69 -174.00 48 1 4 0 50413 0 71802 251
6/2/2010 PSAPSR3 18940.00 8780.00 11.69 8.69 100 0 YES 40000.00 0.00 21071.69 -8771.31 47 22 4 0 50413 6294 71551 14269
For weekly:
Date Tablespace name Size(MB) Chg.size(MB) Free(MB) Chg.free(MB) Increment by Chg. used(%) Autoextend User blocks Chg.tot.size (MB) Total free(MB) Chg. total free(MB) Total used(%) Chg.tot.used(%) #Files Chg.#Files #Segments Chg. # segments #Extents Chg. # extents
6/21/2010 PSAPSR3 23480.00 980.00 20.56 0.43 100 0 YES 40000.00 0.00 16540.56 -979.57 59 3 4 0 50421 3 74396 614
6/14/2010 PSAPSR3 22500.00 1900.00 20.13 -2.31 100 0 YES 40000.00 0.00 17520.13 -1902.31 56 5 4 0 50418 5 73782 1032
6/7/2010 PSAPSR3 20600.00 1660.00 22.44 10.75 100 0 YES 40000.00 0.00 19422.44 -1649.25 51 4 4 0 50413 0 72750 1199
6/2/2010 PSAPSR3 18940.00 8780.00 11.69 8.69 100 0 YES 40000.00 0.00 21071.69 -8771.31 47 22 4 0 50413 6294 71551 14269
3/17/2010 PSAPSR3 10160.00 0.00 3.00 0.00 100 0 YES 40000.00 0.00 29843.00 0.00 25 0 4 0 44119 0 57282 0
For monthly, only two entries:
Date Tablespace name Size(MB) Chg.size(MB) Free(MB) Chg. #extents/day Increment by Chg. used (%) Autoextend User blocks Chg.tot.size (MB) Total free(MB) Chg. total free (MB) Total used(%) Chg.used(%) #Files Chg.#Files #Segments Chg. # segments #Extents Chg. # extents
6/2/2010 PSAPSR3 18940.00 8780.00 11.69 8.69 100 0 YES 40000.00 0.00 21071.69 -8771.31 47 22 4 0 50413 6294 71551 14269
3/17/2010 PSAPSR3 10160.00 0.00 3.00 0.00 100 0 YES 40000.00 0.00 29843.00 0.00 25 0 4 0 44119 0 57282 0
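The 80% vs. 23% discrepancy in the original post can be reproduced from the DB02 numbers: "Used" is relative to the space currently allocated to the datafiles, while "Total used" is relative to the autoextend maximum, which is why a nearly empty file system and an 80% used tablespace are not a contradiction. A quick check (figures taken from the post above):

```python
size, free = 45.83, 9.16            # GB allocated / free within the datafiles
total, total_free = 156.25, 119.58  # GB at the autoextend maximum

used = round(100 * (size - free) / size)                # vs. allocated space
total_used = round(100 * (total - total_free) / total)  # vs. autoextend max
print(used, total_used)  # 80 23
```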
Regards,
Priya -
Transport Request doesn't appear in File System
Hello everybody,
I want to transport a request via the file system. When I release the request, it doesn't appear in the file system. The log file looks like this:
1 ETP199X######################################
1 ETP183 EXPORT PREPARATION
1 ETP101 transport order : "DA5K900033"
1 ETP102 system : "DA5"
1 ETP108 tp path : "tp"
1 ETP109 version and release : "380.03.58" "720"
1 ETP198
2 EPU230XExecution of the export pre-processing methods for request "DA5K900033"
4 EPU111 on the application server: "SRV64"
4 EPU138 in client : "100"
2 EPU235XStart: Version creation of the objects of the request "DA5K900033"
3 EPU237 Version creation started as update request
2 EPU236 End: Version creation of the objects of the request "DA5K900033"
2 EPU231XStart: Adjusting the object directory for the objects of the request "DA5K900033"
2 EPU232 End: Adapting the object directory for the objects of the request "DA5K900033"
2 ETN085 "Adding component vector" " " " " " "
2 ETN085 "Finished." " " " " " "
1 ETP183 EXPORT PREPARATION
1 ETP110 end date and time : "20140519103034"
1 ETP111 exit code : "0"
1 ETP199 ######################################
I've already granted Everyone Full Control on the \trans directory for testing purposes.
OS: Windows Server 2008 R2
DB: MS SQL Server 2008 R2
Any suggestions?
Best regards
Dominik
Hi Dominik
What is the target of this transport request? Are you sure it's not marked as "local" in transaction SE01 for example? Because if it is a local transport request, it will not generate a file.
Best regards
Tom -
RAC 10gr2 using ASM for RMAN a cluster file system or a Local directory
The environment is composed of a RAC with 2 nodes using ASM. I have to determine which design is better for backup and recovery with RMAN. The backups are going to be saved to disk only. The database is transactional only and small in size.
I am not sure how to create a cluster file system, or whether it is better to use a local directory. What's the benefit of having a recovery catalog, given that it is optional?
I very much appreciate your advice and recommendation, Terry
Arf,
I am new to RAC. I analyzed Alejandro's script. His main connection is to the first instance; then, through SQL*Plus, he connects to the second instance. He exits the second instance and starts the RMAN backup of the database. Therefore the backup of the database is done from the first instance.
I do not see where he runs setenv again to change to the second instance and run RMAN to back it up. It looks to me like the backup is only done for the first instance, not the second. I may be wrong, but I do not see the second instance backup.
Kindly, I request your assistance on the steps/connection to back up the second instance. Thank you so much!! Terry
HTTP destination PI_INTEGRATIONSERVER missing (system , transaction SM59)
Hi Experts,
I am working on ABAP Proxy -> File scenario. When I trigger message in ECC, it is giving error.
When I go to SXI_Monitor in ECC, the message is shown in red with the error "HTTP destination PI_INTEGRATIONSERVER missing (system , transaction SM59)". But the RFC destination PI_INTEGRATIONSERVER exists, and when I tested the connection I got a "500 empty request received" message.
I have configured in SXMB_ADM the ECC system as application system and mentioned 'dest://PI_INTEGRATIONSERVER' as corresponding integration server.
Could you please help me in resolving this issue?
Thanks and regards,
Prasad
Hi Prasad,
Ensure the items below in your ECC system:
1. Go to SM59.
2. Ensure your RFC destination PI_INTEGRATIONSERVER has this path entry: /sap/xi/engine?type=entry (without quotes).
3. Ensure Target Host has your PI/XI hostname and the correct (HTTP) port number.
4. On the Logon & Security tab, ensure you use the user ID PIAPPLUSER and the correct password (saved), or the service user of the Integration Engine (maintained in the exchange profile).
You can write back to me if any further help required.
Regards
Sekhar
Edited by: sekhar on Dec 24, 2009 6:57 PM -
Any way within XI to explore the file system
Hi all
Please let me know whether there's a way from within XI to explore the file system. For the ABAP part we can use transaction "AL11" to browse the directories mounted on the application server, but I do not expect our directory to be mapped to the server.
Kind Regards,
Anshu Kumar
Hi,
Thanks for your answer, but I am asking without a remote desktop connection and VPN. Is there any way?
Kind Regards,
Kumar