Disk file operations I/O waits
Hi,
I have a production database running 11.2.0.3 on a Windows server.
It has been running for about a year without problems, but today we experienced big performance problems.
When I take a look at the AWR report, we have "Disk file operations I/O" waits for 30% of the DB time.
Checking the AWR for last week, we had no such waits.
The disks are on a SAN, and the SAN team cannot see any problems so far.
Any suggestions? Why do we suddenly have this kind of wait?
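One way to quantify when the waits started is to trend the event across AWR snapshots. A minimal sketch (the values in DBA_HIST_SYSTEM_EVENT are cumulative since instance startup, so compare deltas between consecutive snapshots; filter on your own DBID/instance as needed):

```sql
-- Sketch: trend "Disk file operations I/O" across AWR snapshots.
-- Note: total_waits/time_waited_micro are cumulative since startup;
-- the per-interval cost is the delta between adjacent snap_ids.
SELECT s.snap_id,
       s.end_interval_time,
       e.total_waits,
       ROUND(e.time_waited_micro / 1000000) AS time_waited_s
FROM   dba_hist_system_event e
       JOIN dba_hist_snapshot s
         ON  s.snap_id         = e.snap_id
         AND s.dbid            = e.dbid
         AND s.instance_number = e.instance_number
WHERE  e.event_name = 'Disk file operations I/O'
ORDER  BY s.snap_id;
```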
Thanks!
Brg
jio
954496 wrote:
Hi,
I have a production database running 11.2.0.3 on a Windows server.
It has been running for about a year without problems, but today we experienced big performance problems.
Quantify "big"?
Is every SQL slow?
Even manual SQL issued outside of the application?
Consider providing us details as enumerated in URL below.
How do I ask a question on the forums?
Search for "related documents" as Altavista or Google do it?
Handle: 954496
Status Level: Newbie
Registered: Aug 23, 2012
Total Posts: 12
Total Questions: 4 (4 unresolved)
Why do you waste your time & our time here; since you NEVER get answers to your questions?
Similar Messages
-
How do I interpret the "Disk file operations I/O" wait event?
I have a large and very busy batch database. All of a sudden the "Disk file operations I/O" wait event is in the top 5 in AWR.
The manual page isn't very helpful:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/waitevents003.htm#insertedID40
Disk file operations I/O
This event is used to wait for disk file operations (for example, open, close, seek, and resize). It is also used for miscellaneous I/O operations such as block dumps and password file accesses.
So here is my question: What exactly is going on when I see this wait event? Why doesn't it show up as one of the other I/O events? Can I make it go away? Should I make it go away?
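The event's wait parameters can tell you which kind of file operation is involved. A sketch, assuming the usual 11.2 parameter names (FileOperation/fileno/filetype) and that ASH is licensed on your system; check v$event_name on your own version first:

```sql
-- Sketch: discover what the event's p1/p2/p3 mean on this version.
SELECT parameter1, parameter2, parameter3
FROM   v$event_name
WHERE  name = 'Disk file operations I/O';
-- Typically: FileOperation, fileno, filetype

-- Then break down recent samples by operation and file type.
SELECT p1 AS file_operation, p3 AS filetype, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  event = 'Disk file operations I/O'
GROUP  BY p1, p3
ORDER  BY COUNT(*) DESC;
```

The numeric FileOperation codes (open, close, resize, etc.) are version-dependent, so map them against the documentation for your release rather than a table found online.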
DRsb92075 wrote:
All of a sudden the "Disk file operations I/O" wait event is in the top 5 in AWR.
Top wait event
In EVERY Top Wait Event list, one wait event will ALWAYS be on top as #1; by definition of the list.
Simply because any item, even #1, appears on this list does not mean this is a problem & needs to be fixed.
If the Top Wait Event accounts for only 5 seconds out of a 1 hour sample,
then reducing it to ZERO won't measurably improve overall application performance.
The actual Time Waited is required to determine if it is a problem or not.
It's taking 20% of the time in a 15-minute sample. Anything that takes 20% of the time deserves to be understood... So: what actually causes it?
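The actual Time Waited is easy to pull from the cumulative system-wide view. A sketch (values accumulate since instance startup, so sample twice over your interval and take the difference):

```sql
-- Sketch: current cumulative totals for the event.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1000000, 1)                    AS time_waited_s,
       ROUND(time_waited_micro / GREATEST(total_waits, 1) / 1000, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event = 'Disk file operations I/O';
```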
DR -
USB Drive Resets during file copy, interrupting file operations
Fresh Arch Linux install, working USB thumb drive (Windows, previous Arch Linux setups, etc.), formatted FAT32.
It seems to randomly reset, eject, and auto-re-mount during extensive file operations (e.g. copying 4 GB across 800 files from the USB thumb drive to disk).
I've tried various USB ports to no avail; no external hub being used.
Thoughts? Not sure where to begin diagnosing:
dmesg:
usb 1-1: configuration #1 chosen from 1 choice
scsi17 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 15
usb-storage: waiting for device to settle before scanning
scsi 17:0:0:0: Direct-Access OCZ RALLY2 1100 PQ: 0 ANSI: 0 CCS
sd 17:0:0:0: Attached scsi generic sg2 type 0
usb-storage: device scan complete
sd 17:0:0:0: [sdb] 31326208 512-byte logical blocks: (16.0 GB/14.9 GiB)
sd 17:0:0:0: [sdb] Write Protect is off
sd 17:0:0:0: [sdb] Mode Sense: 43 00 00 00
sd 17:0:0:0: [sdb] Assuming drive cache: write through
sd 17:0:0:0: [sdb] Assuming drive cache: write through
sdb: sdb1
sd 17:0:0:0: [sdb] Assuming drive cache: write through
sd 17:0:0:0: [sdb] Attached SCSI removable disk
usb 1-1: reset high speed USB device using ehci_hcd and address 15
sd 17:0:0:0: [sdb] Unhandled error code
sd 17:0:0:0: [sdb] Result: hostbyte=0x07 driverbyte=0x00
end_request: I/O error, dev sdb, sector 7589504
sd 17:0:0:0: [sdb] Unhandled error code
sd 17:0:0:0: [sdb] Result: hostbyte=0x07 driverbyte=0x00
end_request: I/O error, dev sdb, sector 7589744
sd 17:0:0:0: [sdb] Unhandled error code
sd 17:0:0:0: [sdb] Result: hostbyte=0x07 driverbyte=0x00
end_request: I/O error, dev sdb, sector 7589776
usb 1-1: USB disconnect, address 15
FAT: bread failed in fat_clusters_flush
FAT: FAT read failed (blocknr 3805)
FAT: Directory bread(block 24147944) failed
FAT: FAT read failed (blocknr 3818)
<snip lots of repeats>
FAT: FAT read failed (blocknr 4032)
FAT: bread failed in fat_clusters_flush
FAT: bread failed in fat_clusters_flush
FAT: bread failed in fat_clusters_flush
usb 1-1: new high speed USB device using ehci_hcd and address 16
usb 1-1: configuration #1 chosen from 1 choice
scsi18 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 16
usb-storage: waiting for device to settle before scanning
scsi 18:0:0:0: Direct-Access OCZ RALLY2 1100 PQ: 0 ANSI: 0 CCS
sd 18:0:0:0: Attached scsi generic sg2 type 0
usb-storage: device scan complete
sd 18:0:0:0: [sdb] 31326208 512-byte logical blocks: (16.0 GB/14.9 GiB)
sd 18:0:0:0: [sdb] Write Protect is off
sd 18:0:0:0: [sdb] Mode Sense: 43 00 00 00
sd 18:0:0:0: [sdb] Assuming drive cache: write through
sd 18:0:0:0: [sdb] Assuming drive cache: write through
sdb: sdb1
sd 18:0:0:0: [sdb] Assuming drive cache: write through
sd 18:0:0:0: [sdb] Attached SCSI removable disk
lsusb -v for the drive:
Bus 001 Device 016: ID 0325:ac02 OCZ Technology Inc ATV Turbo / Rally2 Dual Channel USB 2.0 Flash Drive
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x0325 OCZ Technology Inc
idProduct 0xac02 ATV Turbo / Rally2 Dual Channel USB 2.0 Flash Drive
bcdDevice 11.00
iManufacturer 1 OCZ Technology
iProduct 2 RALLY2
iSerial 3 AA04012700275633
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 32
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0x80
(Bus Powered)
MaxPower 300mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 8 Mass Storage
bInterfaceSubClass 6 SCSI
bInterfaceProtocol 80 Bulk (Zip)
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 255
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 255
Device Qualifier (for other device speed):
bLength 10
bDescriptorType 6
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
bNumConfigurations 1
Device Status: 0x0000
(Bus Powered)
Jithin wrote:
Whenever I copy or create a file on my external USB drive, which is FAT32, file permissions are not preserved.
From <http://support.apple.com/kb/HT3764>:
"The MS-DOS (FAT32) file system format does not support permissions, file owners, and groups. Such permissions are synthesized on Mac OS X with some default permissions. Because of this all files will have the same permissions […]"
So it's not "how it works on the Mac"; it's a limitation of the FAT32 file system. -
Very, very slow file operations
I needed to make a specific change to an exif field of some images. So what I did was identify the files that I needed to change in Lightroom (they were in various folders on my disk) and move them to other folders so they would be easy to operate on. Then I used exiftool to change the field in these files, and then I thought it would be easy to read in the changed data and move them back to their original folders in Lightroom. What I found included:
I tried to Synchronize Metadata on the folders after changing the exif field. One folder had about 500 images in it, the other about 2,000. After 3 hours, the progress bar had barely moved, and I aborted.
When I tried to quit Lightroom it said that it was "writing metadata", even though the synchronization had changed the external data and it should only have been reading metadata. I quit anyway and rebooted.
After rebooting, and relaunching Lightroom I looked at the "metadata status" column in Grid view. It was (as always, it seems) completely wrong about which files were up to date.
I was able to "read metadata" on these 2,500 images in "only" about an hour or two. Note that I didn't ask Lightroom to update the cached images, only to read the metadata. Reading metadata at 1,000-2,000 images an hour seems exceptionally slow.
Now I am trying to move the files back to their correct folders. I dragged about 1,000 raw images (CR2 + xmp) from one folder to another in Lightroom. Three hours later it is about 50% finished according to the progress bar. I'm going to bed and hope it will finish by the morning. I have to move all the other images as well, but because Lightroom won't update the filter bars in Grid View while moving files, I can't start these moves until Lightroom finishes the first.
Do other people find absolutely absurd speeds for file operations in Lightroom? Is there any solution other than quitting and rebooting (which doesn't always work)? I hate to quit in the middle of a synchronize, read-metadata, or move-files operation even when it seems to be barely chugging along, because I am afraid that when I abort the operation, I will somehow leave a file orphaned, or its metadata out of sync (which happens to me all the time, but I never know why).
When I am doing other operations I also find that I can use Lightroom for only a few hours before it gets so slow that I quit and reboot, after which operations speed up considerably. This doesn't seem like it should be necessary for a "modern" program and operating system, but in my experience it is.
I'm using a Macintosh (4-core i7, 16 GB of RAM), Mavericks latest, Lightroom 5.5, and a RAID disk, so hardware shouldn't be limiting. During this last interminable copy operation, it appears that Lightroom is pegged at 100% of a processor (core).
Sorry, but I don't think your workaround works. I am moving a bunch of files from one folder to various folders (using Lightroom to discriminate between files based on their exif/iptc values), then I change them using an outside program (exiftool), then I put them back in their original folders. This involves having Lightroom read in the new data in the files, which is what is so slow. Reconnecting, if I understand it, is for when you move a folder of files to another location and Lightroom needs to be informed of that move.
As I pointed out, the problem is that Lightroom gets progressively and painfully slower the more you use it. But quitting and restarting can be annoying if you are in the middle of an operation because you can lose context (especially if you restart your computer, which seems to help).
And I do optimize my database nearly daily when using Lightroom.
I would spend the time to fully document this problem and report it as a bug, but my experience is that Adobe shows no evidence of actually reading my bug reports, let alone responding to them. -
Hi Folks,
I was trying to save the basic search template master page "seattle.master" after making a change to the template.
I added just a "CompanyName" folder and updated the line below in seattle.master.
Change is this : <SharePoint:CssRegistration Name="Themable/CompanyName/corev15.css" runat="server"/>
When I save it, and refresh page on browser, it shows "Something went wrong" error.
ULS says the following error : "UserAgent not Available, file operation may not be optimized"
Pls let us know if there is a solution.
Any help Much appreciated !
Thanks,
Sal
Hi Salman,
Thanks for posting this issue,
Just remove the tag in question and check again. It may be that your control is conflicting with others.
Also, see the URL below for more details:
http://social.msdn.microsoft.com/Forums/office/en-US/b32d1968-81f1-42cd-8f45-798406896335/how-apply-custom-master-page-to-performance-point-dashboard-useragent-not-available-file?forum=sharepointcustomization
I hope this is helpful to you. If this works, Please mark it as Answered.
Regards,
Dharmendra Singh (MCPD-EA | MCTS)
Blog : http://sharepoint-community.net/profile/DharmendraSingh -
How do I access my encrypted User Account files from my Back Up hard drive? Time Machine was used to create the back up disk; File Vault was used to encrypt the files.
Thanks. I will try going through TM. Since my Simpletech is on the way out, I'll be plugging in a new external hard drive (other than the back-up drive) and trying to restore the library to the new drive. Any advice or warning if this is NOT the right thing to do?
Meanwhile, that is a great tip to do an alternate back-up using a different means. It's been tough to figure out how to "preserve access" to digital images and files for posterity, knowing the hardware will always fail/obsolesce sooner or later, and that "clouds" are only as good as their consistent and reliable accessibility. Upping the odds with redundancy will help dull the edge of my "access anxiety", though logically, it can never relieve it. Will look into Carbon Copy Cloner. -
ACS v5.2 - Unable to update User integer attributes through File Operations
Hi,
I have created some internal users on ACS v5.2 and added some Unsigned Integer attributes for each user. I am trying to do a bulk update of these integer attributes using the File Operation facility. However, no matter what number I put in the import template, it doesn't get updated and displays a "0" in the user config.
The import template is validated successfully with no errors and also the string attributes are updated correctly.
There is a workaround of sorts: deleting the users and adding them back with the updated values. But this is not feasible, as it would reset their passwords. I have also tried saving the CSV file in OpenOffice instead of Excel.
Has any one else come across this problem?
(I am unable to see this issue in the Release Notes or Bug Toolkit, although there is a similar issue when updating devices in CSCth68051)
Hi,
Thanks for the reply. I have managed to recreate the problem to show you but it is a bit more complicated than I first thought. The problem only occurs when the integer attributes are added after the user is created.
I created a dummy user. The MTL and TLS attributes were present before the user was added. I then added the XXX and ZZZ attributes afterwards and assigned them default values. The default values show up in the GUI config.
However when I export the database to a csv file only the values of the MTL and TLS attributes show up in the export file:
I then downloaded an import template and updated the integer values for TLS,MTL, XXX and ZZZ for the dummy user:
The file imports successfully with no errors. However, when I display the user config only the MTL and TLS attributes have changed. The XXX and ZZZ attributes have stayed the same.
I thought it might be because I was assigning a default value of 0 to the new attributes but I assigned ZZZ a default value of 1 and the same thing occurred. -
iTunes Match has stopped uploading - every file errors and says "waiting". I have tried deleting the files, using other formats, etc. I have had the service since day 1 and NEVER had an issue. It didn't start until "Delete from Cloud" switched to "Hide from Cloud" - the files that do not upload show grayed out on my other devices.
Have you confirmed that you successfully purged iTunes Match by also looking on an iOS device? If so, keep in mind that Apple's servers may be experiencing a heavy load right now. They just added about 19 countries to the service, and I've read a few accounts this morning suggesting all's not running perfectly right now.
-
URGENT: File Adapter List Files operation Issue
Hi All,
We are using the List Files operation in one of our SOA composites, which lists all files available in a directory. What we observed is that files are not listed according to their timestamps.
Is there any property to list all files ascending or descending by timestamp? We tried the ListSorter property suggested by Oracle, but it works only for INBOUND operations. [http://docs.oracle.com/cd/E23943_01/integration.1111/e10231/adptr_file.htm#BABBIJCJ]
Any suggestions will be greatly appreciated.
Hi,
You can try 2 options:
1. You would need to capture/collect all the file names, you might have to use BPM and create a separate interface.
2. You can also pick up those files from the archive directory using FTP and push them using mail adapter.
Regards,
Pavan -
Unable to run a Batch File Operating System Command
Using XI 3.0, I am unable to run a Batch File Operating System Command After Message Processing.
My Batch file:
gpg -se -r BOA3RSKY --armor --passphrase-fd 0 %1 < C:\Progra~1\GNU\GnuPG\gpgin
My Command Line (ID scenario)
exec "cmd.exe /c C:\Progra~1\GNU\GnuPG\boagpg.bat %F"
If I execute
exec "cmd.exe /c type C:\Progra~1\GNU\GnuPG\boagpg.bat >xis.txt"
It displays the contents of boagpg.bat file in xis.txt.
I just don't understand why, when I run the batch file, I don't get a %F.asc encrypted file in the same directory as the unencrypted %F file, as I would expect.
Any ideas?
Or will I need Basis to create commands that will allow me to run GPG from the XI command line?
Check these links if they are helpful:
http://help.sap.com/saphelp_nw04/helpdata/en/bb/c7423347dc488097ab705f7185c88f/frameset.htm
/people/sap.user72/blog/2004/01/30/command-line-help-utility
Check this thread a similar problem
Process Integration (PI) & SOA Middleware
Note 841704 - XI File & JDBC Adapter: Operating system command
http://service.sap.com/sap/support/notes/841704
Try to see the below links
/people/michal.krawczyk2/blog/2005/08/17/xi-operation-system-command--error-catching
OS Command on FTP
OS command line script - Need help
FTP - Run OS Command before file processing
Note: reward points if the solution was helpful.
Regards
Chandrakanth.k -
How to store images as disk files rather than in iphoto db
Is there a way to make iPhoto maintain the images as disk files rather than import them into iPhoto's massive database file? I would prefer to use iPhoto to just store a link to the photo, rather than the photo itself. That way I can work with the image in Photoshop or whatnot without having to jerk around with the iPhoto database.
Welcome to the Apple Discussions. No need to find a workaround. You can use Photoshop from inside iPhoto.
Using Photoshop (or Photoshop Elements) as Your Editor of Choice in iPhoto.
1 - select Photoshop as your editor of choice in iPhoto's General Preference Section's under the "Edit photo:" menu.
2 - double click on the thumbnail in iPhoto to open it in Photoshop. When you're finished editing, click on the Save button. If you immediately get the JPEG Options window, make your selection (Baseline standard seems to be the most compatible jpeg format) and click on the OK button. You're done.
3 - however, if you get the navigation window that indicates that PS wants to save it as a PS formatted file, you'll need to either select JPEG from the menu and save (top image) or click on the desktop in the Navigation window (bottom image) and save it to the desktop for importing as a new photo.
This method will let iPhoto know that the photo has been edited and will update the thumbnail file to reflect the edit.
NOTE: With Photoshop Elements 6, the Saving File preferences should be configured: "On First Save: Save Over Current File". Also, I suggest that Maximize PSD File Compatibility be set to Always.
If you want to use both iPhoto's editing mode and PS without having to go back and forth to the Preference pane, once you've selected PS as your editor of choice, reset the Preferences back to "Open in main window". That will let you either edit in iPhoto (double click on the thumbnail) or in PS (Control-click on the thumbnail and select "Edit in external editor" in the Contextual menu). This way you get the best of both worlds.
TIP: For insurance against the iPhoto database corruption that many users have experienced I recommend making a backup copy of the Library6.iPhoto (iPhoto.Library for iPhoto 5 and earlier) database file and keep it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backup after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That insures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
I've created an Automator workflow application (requires Tiger or later), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 6 and 7 libraries and Tiger and Leopard. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
Note: There now an Automator backup application for iPhoto 5 that will work with Tiger or Leopard. -
ORA-29283 invalid file operation
NLSRTL 10.2.0.5.0 Production
Oracle Database 10g Enterprise Edition 10.2.0.5.0 64bi
PL/SQL 10.2.0.5.0 Production
TNS for IBM/AIX RISC System/6000: 10.2.0.5.0 Productio
I am trying to get the content of a trace file generated for me.
Because I don't have privileges to log on the server and copy the trace file for me directly with some os user, I am doing the following:
1. I alter my session trace identifier to more easily identify the trace file:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'Func01';
2. I invoke DBMS_MONITOR.
3. I run the procedure I want to monitor.
4. I disable the monitoring by calling DBMS_MONITOR
5. At this point I run the following query to identify my trace file:
select u_dump.value || '/' || instance.value || '_ora_' || v$process.spid || nvl2(v$process.traceid, '_' || v$process.traceid, null ) || '.trc'"Trace File"
from V$PARAMETER u_dump
cross join V$PARAMETER instance
cross join V$PROCESS
join V$SESSION on v$process.addr = V$SESSION.paddr
where 1=1
and u_dump.name = 'user_dump_dest'
and instance.name = 'instance_name'
and V$SESSION.audsid=sys_context('userenv','sessionid');
It gives me: /ORACLE/MYDB/trace/MYDB_ora_3616822_Func01.trc
I have created directory in advanced on the path where the traces are stored:
CREATE OR REPLACE DIRECTORY trace_dir AS '/ORACLE/MYDB/trace/';
SELECT * FROM dba_directories WHERE directory_name = 'TRACE_DIR';
Output:
OWNER DIRECTORY_NAME DIRECTORY_PATH
SYS TRACE_DIR /ORACLE/MYDB/trace/
I don't have rights to grant read and write on TRACE_DIR to my user, as I am not logged in as SYS.
I created a table to store in it the lines from the trace file:
CREATE TABLE tmp_traces_tab
(
callnum NUMBER,
line NUMBER,
fileline CLOB
);
Then I run the following PL/SQL block to retrieve the content of the trace and store it in the table:
DECLARE
l_file UTL_FILE.file_type;
l_location VARCHAR2 (100) := 'TRACE_DIR';
l_filename VARCHAR2 (255) := 'MYDB_ora_3616822_Func01.trc';
l_text VARCHAR2 (32767);
l_line NUMBER := 1;
l_call NUMBER := 1;
BEGIN
-- Open file.
l_file := UTL_FILE.fopen (l_location, l_filename, 'r', 32767);
-- Read and output first line.
UTL_FILE.get_line (l_file, l_text, 32767);
INSERT INTO tmp_traces_tab (callnum, line, fileline) VALUES (l_call, l_line, l_text);
l_line := l_line + 1;
BEGIN
LOOP
UTL_FILE.get_line (l_file, l_text, 32767);
INSERT INTO tmp_traces_tab (callnum, line, fileline) VALUES (l_call, l_line, l_text);
l_line := l_line + 1;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
END;
UTL_FILE.fclose (l_file);
END;
/
And when I run the code I get the error: ORA-29283: invalid file operation.
Is it possible, via a role, for my user to get the content of the trace files in the directory TRACE_DIR without having explicit READ and WRITE privileges on it?
My user currently has these roles:
select * from dba_role_privs where grantee = USER;
Output:
U1 OPR_ROLE_LOSS_SNAPSHOT_READER YES YES
U1 RESOURCE NO YES
U1 CONNECT NO YES
U1 DBA NO YES
U1 OPR_ROLE_SUPPORT_USER YES YES
I know that on another DB, with a different user, I hit no errors when doing exactly the same thing (of course the program unit I monitor is different).
That other user with which I have NO issues has these roles:
select * from dba_role_privs where grantee = USER;
Output:
U2 DBA NO YES
U2 EXEC_SYS_PACKAGES_ROLE NO YES
U2 EXECUTE_CATALOG_ROLE NO YES
U2 CONNECT NO YES
Verdi wrote:
NLSRTL 10.2.0.5.0 Production
Oracle Database 10g Enterprise Edition 10.2.0.5.0 64bi
PL/SQL 10.2.0.5.0 Production
TNS for IBM/AIX RISC System/6000: 10.2.0.5.0 Productio
And when I run the code I get the error: ORA-29283 invalid file operation.
Is it possible, via a role, for my user to get the content of the trace files in the directory TRACE_DIR without having explicit READ and WRITE privileges on it?
My user currently has these roles:
select * from dba_role_privs where grantee = USER;
Output:
U1 OPR_ROLE_LOSS_SNAPSHOT_READER YES YES
U1 RESOURCE NO YES
U1 CONNECT NO YES
U1 DBA NO YES
U1 OPR_ROLE_SUPPORT_USER YES YES
I know that on another db with a different user I hit no errors when doing completely the same (of course the program unit I monitor is different).
Thanks for posting version alongwith other details.
To my knowledge, no, you cannot.
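If a DBA can step in, the usual fix is an explicit object grant on the directory. A sketch, run as SYS or another suitably privileged user (U1 is the grantee name from the role listing above):

```sql
-- Grant the directory privileges directly to the user,
-- not to a role, so they are visible inside PL/SQL.
GRANT READ ON DIRECTORY trace_dir TO u1;
GRANT WRITE ON DIRECTORY trace_dir TO u1;  -- only if writing is also needed
```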
Privileges acquired via a role are not visible inside definer's-rights PL/SQL. You need to have explicit privileges. -
Share airport disk file between multiple user accounts
I am attempting to share my original airport disk file between the users on this computer. When I setup file sharing With Accounts the original file can no longer be seen.
I have an existing file, I just want other users to access that same file.
How can this be done? -
Laggy file operations windows 7 clients with windows server 2008 R2
Hope this is posted in the right place..
Ok so this is driving me insane...
I have a DNS/file server running Windows Server 2008 R2, and 5 client pc's running windows 7 x64 and x86.
All the client PCs have extremely laggy file operations when, for example, right-clicking to create a new folder or opening certain files (AutoCAD). I'm talking about a 3-10 second lag for a response.
However, if I do a straight file transfer to test the transfer speed, it's all normal. Everything is connected at gigabit network speeds.
This just seemed to happen one day a few months ago, and has remained ever since. Before that (couple years) it was not doing this...
Anyone have any ideas? I'm at a loss... I've checked Performance Monitor on the server; nothing seems abnormal when this happens.
Thank you,
Hi,
Thank you,Hi,
Please try disabling all third-party applications on the file server to see if one of them is the cause. Such issues are sometimes caused by antivirus programs, firewalls, etc.
A quick step is to boot into Clean Boot mode as described in the following article:
http://support.microsoft.com/kb/929135
If you have any feedback on our support, please send to [email protected] -
Filestream Creation Unable to Open Physical File Operating System Error 259
Hey Everybody,
I have run out of options supporting a customer who is getting an error when creating a database with a filestream. The error displayed is "unable to open physical file, operating system error 259 (No more data is available)". We're using a pretty standard creation SQL script that we aren't having issues with for other customers:
-- We are going to create our data paths for the filestreams.
DECLARE @data_path nvarchar(256);
SET @data_path = (SELECT SUBSTRING(physical_name, 1, CHARINDEX(N'master.mdf', LOWER(physical_name)) - 1)
FROM master.sys.master_files
WHERE database_id = 1 AND file_id = 1);
-- At this point, we should be able to create our database.
EXECUTE ('CREATE DATABASE AllTables
ON PRIMARY
(NAME = AllTables_data
,FILENAME = ''' + @data_path + 'AllTables_data.mdf''
,SIZE = 10MB
,FILEGROWTH = 15%),
FILEGROUP FileStreamAll CONTAINS FILESTREAM DEFAULT
(NAME = FSAllTables
,FILENAME = ''' + @data_path + 'AllTablesFS'')
LOG ON
(NAME = AllTables_log
,FILENAME = ''' + @data_path + 'AllTables_log.ldf''
,SIZE = 5MB
,FILEGROWTH = 5MB)');
GO
We are using SQL Server 2014 Express. FILESTREAM was enabled when SQL Server was installed. The instance was created successfully and we are able to connect to the database through SSMS. The customer's drive is encrypted with Sophos.
We have tried the following:
1. Increasing the permissions of the SQL Server server to have full access to the folders.
2. Attempted a restore of a blank database and it failed.
There don't seem to be any knowledge base articles on this particular error, and I am not sure what else I can do to resolve it. Thanks in advance for any help!
Hi Ryan,
1) SQL Server (any version) can't be installed on encrypted drives. Please see a similar scenario in the following link:
https://ask.sqlservercentral.com/questions/115761/filestream-and-encrypted-drives.html
2) I don't think there is any problem with permissions on the folder if the user can create a database in the same folder, though I'm not too sure. Also see the article by Jacob on configuring FILESTREAM for SQL Server, which describes how to configure the FILESTREAM access level and create a FILESTREAM-enabled database.
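If it helps, the effective FILESTREAM state can be checked from T-SQL before retrying the create. A sketch (SERVERPROPERTY reports the effective level actually in force; sp_configure with just the option name displays the instance-level setting):

```sql
-- Sketch: confirm FILESTREAM is enabled at both levels.
SELECT SERVERPROPERTY('FilestreamEffectiveLevel') AS effective_level;  -- 0 = disabled
EXEC sp_configure 'filestream access level';
```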
Hope this helps,
Thanks
Bhanu