Command To ID File System Type
I found a server running Arch. It powered on fine and seems to be running great. I was able to recover the root login; however, I would like to know what command I can run in a terminal to identify the file system type on the disk partitions.
[root@tiger /]# fdisk -l
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b7d3e
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          62      497983+  82  Linux swap / Solaris
/dev/sda2   *          63         311     2000092+  83  Linux
/dev/sda3             312       19457   153790245   83  Linux
Does fdisk -l not supply what you were asking for? You could also use the command blkid, which prints each partition's filesystem type in a TYPE= field (though not the fdisk-style partition Id column).
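For the record, a few other commands answer the original question directly. This is only a sketch assuming a typical Linux userland (coreutils, util-linux); the /dev/sda2 device name is an example, not taken from a real system:

```shell
# Filesystem types of everything currently mounted, straight from the kernel:
awk '{print $1, $2, $3}' /proc/mounts | head -n 5

# Type of the filesystem behind a given mount point (coreutils df):
df -T / | awk 'NR == 2 {print $2}'

# For an unmounted partition, blkid prints a TYPE= field next to the UUID.
# The device below is an example -- substitute your own partition:
# blkid /dev/sda2
```

Running `file -s /dev/sda2` as root also guesses the type from the superblock.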
Similar Messages
-
Oracle VM Server 2.2.1 FIle System Type
I have installed VM Server 2.2.1, but I recently ran into disk size problems and want to resize. I have no space on the / partition and all my space on /var/ovs/mount/"randomnumber". When I used the parted command "print" to see my partitions, the /var partition has no file system type such as ext3 or ext2; the / partition has ext3. Does anyone know why /var/ovs/mount/"randomnumber" has no file system type? Without this I cannot resize the partition. Let me also say that when I installed VM Server I pretty much took all the defaults, so the partitions were created this way by default.
Thanks, Paul

Paul_RealityTech's question is quoted above; the reply: /var/ovs/mount/UUID is ocfs2 and cannot be shrunk in size. -
:: Running Hook [udev]
:: Triggering uevents...done
Root device '804' doesn't exist.
Creating root device /dev/root with major 8 and minor 4.
error: /dev/root: No such device or address
ERROR: Unable to determine the file system type of /dev/root:
Either it contains no filesystem, an unknown filesystem,
or more than one valid file system signature was found.
Try adding
rootfstype=your_filesystem_type
to the kernel command line.
You are now being dropped into an emergency shell.
/bin/sh: can't access tty; job control turned off
[ramfs /]# [ 1.376738] Refined TSC clocksource calibration: 3013.000 MHz.
[ 1.376775] Switching to clocksource tsc
That's what I get when I boot my Arch system. It worked fine for quite a while, but suddenly it ran into an error where the SCSI driver module was corrupt. I fixed it by reinstalling util-linux-ng and kernel26; however, I run into this issue now. http://www.pastie.org/2163181 < Link to /var/log/pacman.log for the month of July, just in case. Yes, I bought a new ATI/AMD Radeon HD 5450 this Saturday, but it seemed to work fine until this broke, and it works fine on Ubuntu too, so I suppose we can rule it out.
Last edited by SgrA (2011-07-05 20:45:36)

Autodetection failed on your first image, in both your previous kernel installs:
[2011-07-04 16:14] find: `/sys/devices': No such file or directory
Which means that sysfs was not mounted. You should be able to boot from the fallback image, which does not use autodetect. Figure out why /sys isn't mounted, as well, and fix that.
This is also a somewhat crappy bug in mkinitcpio that lets you create an autodetect image that's useless. It'll be fixed in the next version of mkinitcpio that makes it to core.
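To restate the diagnosis as commands: the autodetect hook needs /sys/devices, so the first thing to confirm is that sysfs is mounted. The rebuild line is Arch-specific and must run as root on the affected machine, so it is left commented; the preset name kernel26 is my assumption, matching the kernel package of that era:

```shell
# The autodetect hook walks /sys/devices, so sysfs must be mounted:
mountpoint -q /sys && echo "/sys is mounted"

# Once /sys is fixed, rebuild the default and fallback images (Arch, as root):
# mkinitcpio -p kernel26
```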
Last edited by falconindy (2011-07-04 17:41:19) -
OVM Manager 2.2.2: server pool error with file system type
I'm just getting started with OVM. I've installed OVM on one machine and the manager on another.
I created a server pool, which seemed to work OK but shows "Error" in the table under the "Server Pools" tab in the manager interface.
When I edit it, I see:
Error: OVM-1011 OVM Manager communication with NNN.NNN.NN.NNN for operation Pre-check cluster root for Server Pool failed: <Exception: SR '/dev/sda3' not supported: type 'ocfs2.local' not in ['nfs', 'ocfs2.cluster']>
Can anyone explain this? Does this mean I can't use a local file system in OVM 2.2.2? I understood this was the case with OVM 3, which is why I went with 2.2.2.
Thanks.

Roger Ford wrote:
> Does this mean I can't use a local file system in OVM 2.2.2?

You can't create a clustered pool with a local filesystem. You need to format the filesystem with ocfs2 in clustered mode. -
What are the characteristics of the procfs file system type ?
Hi Solaris guys,
I walked through the Student Guide SA-239 to find information about the procfs file system (/proc), but found nothing yet. Could anyone here explain it to me?
1. File ownership is determined by the credentials of the process.
2. It contains reference by file names to the opened files of the process.
3. Each process ID named directory has files that contain more detailed information about the process.
4. It contains a decimal number directory entry corresponding to a process ID.
I wonder whether these characteristics are correct or not. Please help me.
Thanks a lot!

man -s4 proc
or
http://docs.sun.com/app/docs/doc/817-0683/6mgff29c4?q=procfs&a=view
It's a virtual directory structure with a directory per process. These per-process directories contain more directories and files that supply detailed information on that process, e.g. per-open-file information.
Access to the per process directories is controlled by checking the credentials of the accessing process against the credentials of the
process whose procfs directory is being accessed.
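This description is easy to poke at from a shell. On Linux the file names differ from Solaris's /proc (which exposes binary psinfo/status files rather than text), but the structure, one numeric directory per process ID, is the same:

```shell
# Every numeric entry under /proc is a process ID:
ls /proc | grep -E '^[0-9]+$' | head -n 5

# Per-process detail files; /proc/self is a shortcut to the current process:
grep '^Name:' /proc/self/status

# One entry per open file descriptor of the process:
ls /proc/self/fd
```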
tim -
Hi All,
I want to know about the file system of my hard disk (FAT32 or NTFS) using Java. Can we do it using Java?
regards,
Maheshwaran Devaraj

When you say path, do you mean you want the path to print out in your HTML? Is that accurate? If so, you generally need to construct that path based on the path to the current component. So if your component is located at /content/mysite/en/about/mypage/jcr:content/parsys/image then the path to the image would generally be something like /content/mysite/en/about/mypage/jcr:content/parsys/image.img.jpg/1283829292873.jpg. The .img. selector triggers the servlet associated with the foundation parbase - /libs/foundation/components/parbase/img.GET.java. The reason you reference it this way is that there is no filesystem path to the image - it is stored in the repository, not on the file system, and it requires a servlet or script to get the binary from the repository and stream it.
Normally the way you'd construct this is to use the out-of-the-box Image class - so look at /libs/foundation/components/image/image.jsp. Now, this example assumes that the component where you loaded the image extends /libs/foundation/components/parbase. If it doesn't, then you have to change your sling:resourceSuperType to /libs/foundation/components/parbase or to some other component that does extend /libs/foundation/components/parbase.
Hi,
I would like to have a dual boot Solaris 10, Windows system. I want to create a partition such that both systems can read and write to it.
Any ideas?

Best choice today is a FAT filesystem. It has limitations on file size and total size, but most operating systems can use it easily.
Darren -
How to determine the file system on Solaris
Friends,
How to determine which file system I have installed, UFS or ZFS, on Solaris
Thanks

Other methods would include looking at /etc/vfstab (if it's in there) or fstyp(1M):
System Administration Commands fstyp(1M)
NAME
fstyp - determine file system type
SYNOPSIS
fstyp [-a | -v] special [:logical-drive] -
Mounting the Root File System into RAM
Hi,
I had been wondering, recently, how one can copy the entire root hierarchy, or wanted parts of it, into RAM, mount it at startup, and use it as the root itself. At shutdown, the modified files and directories would be synchronized back to the non-volatile storage. This synchronization could also be performed manually, before shutting down.
I have now succeeded, at least it seems, in performing such a task. There are still some issues.
For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
A custom kernel hook is used to (overall):
Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
Copy the non-volatile content into the ramdisk.
Remount by binding each of these two mountpoints in the new root, so that we can have access to both volumes in the new ramdisk root itself once the root is changed, to synchronize back any modified RAM content to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
A mount handler is set (mount_handler) to a custom function, which mounts, by binding, the new ramdisk root into a root that will be switched to by the kernel.
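The copy step of the hook can be illustrated in miniature. This is only a sketch: the /tmp paths stand in for the /root_source and /root_ram mountpoints described above and are not the actual hook code:

```shell
# Stand-ins for the hook's mountpoints (the real ones live in the initramfs):
src=/tmp/demo_root_source   # plays the part of /root_source (non-volatile root)
dst=/tmp/demo_root_ram      # plays the part of /root_ram (a tmpfs in reality)

mkdir -p "$src/etc" "$dst"
echo "hello" > "$src/etc/demo.conf"

# Copy the tree preserving permissions, ownership and symlinks -- what the
# hook's "cp" transfer tool does for the entire root:
cp -a "$src/." "$dst/"

ls "$dst/etc"   # -> demo.conf
```

In the real setup $dst is a tmpfs mount, so the copied tree lives in RAM; the sketch only shows the cp -a semantics.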
To integrate this hook into an initramfs, a preset is needed.
I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
When shutting down, the ramdisk contents are synchronized back with the source root by means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
I chose to use unison to synchronize between the volatile and non-volatile mediums. When synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain and exit with an error, although it will still synchronize the rest. Thus, I recommend that if you sync manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), do not execute any other command before synchronization has completed, because ~/.bash_history, for example, would be updated, and unison would not update this file.
Some prerequisites exist (by default):
Packages: unison (and cp), find, cpio, rsync and, of course, any other packages with which you can mount your root file system (type). I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g and the nfs-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
Referencing paths:
The variables:
source=
temporary=
...should have the same value in all of these files:
"/etc/initcpio/hooks/ram"
"/root/Documents/rootfs/unmount-root-fs.sh"
"/root/.rsync/exclude.txt" -- Should correspond.
This is needed to sync the RAM disk back to the hard disk.
I think that it is required to have the old root and the new root mountpoints directly residing at the root / of the initramfs, from what I have noticed. For example, "/new_root" and "/old_root".
Here are all the accepted and used parameters:
Parameter | Allowed values | Default | Considered values | Description
root | the usual device notations (UUID=+, /dev/disk/by-*/*) | none | any string | The source root
rootfstype | any "-t <type>" accepted by "mount" | "auto" | any string | The FS type of the source root
rootflags | any "-o <options>" accepted by "mount" | none | any string | Options when mounting the source root
ram | any string | none | "1" | Whether this hook should be run
ramfstype | any "-t <type>" accepted by "mount" | "auto" | any string | The FS type of the RAM disk
ramflags | any "-o <options>" accepted by "mount" | "size=50%" | any string | Options when mounting the RAM disk
ramcleanup | any string | none | "0" | Whether any left-overs should be cleaned
ramcleanup_source | any string | none | "1" | Whether the source root should be unmounted
ram_transfer_tool | cp, find, cpio, rsync, unison | unison | cp, find, cpio, rsync | Tool used to transfer the root into RAM
ram_unison_fastcheck | true, false, default, yes, no, auto | "default" | true, false, default, yes, no, auto | Argument to unison's "fastcheck" option (relevant if ram_transfer_tool=unison)
ramdisk_cache_use | 0, 1 | none | 0 | Whether unison should use any available cache (relevant if ram_transfer_tool=unison)
ramdisk_cache_update | 0, 1 | none | 0 | Whether unison should copy the cache to the RAM disk (relevant if ram_transfer_tool=unison)
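Putting a few of these parameters together, a hypothetical kernel command line for this hook might read as follows (the device name and values are illustrative, not taken from the post):

```text
root=/dev/sda2 rootfstype=ext4 ram=1 ram_transfer_tool=rsync ramflags=size=75%
```

Only ram=1 is strictly required; everything else falls back to the defaults listed in the table.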
This is the basic setup.
Optionally:
I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- do the Arch forums allow attachments?)
I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
Of course, use what you need from the provided files.
Here are the values for the time spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
find+cpio: 2:10s (2:12s on slower hardware)
unison: 3:10s - 4:00s
cp: 4 minutes (31 minutes on slower hardware)
rsync: 4:40s (55 minutes on slower hardware)
Beware that the find/cpio option is currently broken; it is available to be selected, but it will not work when being used.
These are the remaining issues:
find+cpio option does not create any destination files.
(On some older hardware) When booting up, the source disk is not always detected.
When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
Inconveniences:
Unison needs to perform an update detection at each startup.
The initramfs's ash does not expand wildcard characters for use with "cp".
That's about what I can think of for now.
I will gladly try to answer any questions.
I don't consider myself a UNIX expert, so I would like to hear your suggestions for improvement, especially from those who do consider themselves experts.
Last edited by AGT (2014-05-20 23:21:45)

How did you use/test unison? In my case, unison is used in the cpio image, where there are no cache files, because unison has not yet been run in the initcpio image before boot time, which is when it is used and when it creates the archives... a circular dependency. Yet files changed by the user would still need to be traversed to detect changes, so I think that even providing pre-made cache files would not guarantee that they would be valid at startup for all installation configurations. -- I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM) after they have been created, and used next time by copying them into the initcpio image during each startup. I think $HOME would need to be set.
Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U CPU at 1.9 GHz with 2 cores/4 threads and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 CPU at 2 GHz with 2 cores/2 threads and 4 GB of RAM.
I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
Grub initially complains that there is no image and shows a "Press any key to continue" message; if you continue, the kernel panics.
I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
Last edited by AGT (2014-05-20 16:49:35) -
DB_UNIQUE_NAME vs DB_NAME in standby databases of ASM file systems
Question: Do we need to have the db_unique_name parameter set differently in the standby database compared to db_name in the standby?
Problem we are facing :
Here is the little background
Primary Server : SERVER1
db_name : VENKAT
db_unique_name : VENKAT
Standby server : SERVER2
db_name : VENKAT
db_unique_name : VENKAT_stb
Server: Linux
Database version: 11.2.0.3
File system type: ASM ( 11.2.0.3)
Standby type: Physical
Disk group names: Identical on both primary and standby servers
Data : +DATA_OP01027_128
FRA : +FRA_VENKAT_128
How datafiles are laid out on the primary server:
sample datafile name location : +DATA_VENKAT_128/VENKAT/datafile/venkat.277.789579565
How the standby was built: using the active duplicate command
Once we have the standby database built, we have the datafiles created under this location.
Sample datafile name location on standby server: +DATA_VENKAT_128/VENKAT_stb/datafile/venkat.280.789579597
With this we have learnt that the directory VENKAT_stb is created in standby ASM from the db_unique_name that was given in the database. We have not seen this issue on a normal file system, even when using a db_unique_name different from the db_name in the standby database.
Can you please help us prevent this situation of having datafiles created under a different directory in standby compared to prod?
Can you also let us know what impact we might have if we don't set db_unique_name different from db_name in the standby database?
Hope this explains the problem we are currently facing.
Steps I followed to fix this issue:
I set db_unique_name to the same name as db_name, and when I did the restore, all datafiles were in locations identical to prod on the standby server.
Note: We do fully understand the need for having db_unique_name set differently from db_name when the standby and primary DBs reside on the same physical server.
Thanks
Venkat

First of all, this is not an issue or problem; it works as intended.

Question: Do we need to have the db_unique_name parameter set differently in the standby database compared to db_name in the standby? -- Yes.
As for preventing datafiles from being created under a different directory in standby compared to prod: well, don't use OMF then.
OMF format for datafiles in ASM is: +DISKGROUP/DB_UNIQUE_NAME/DATAFILE/TABLESPACE_NAME.FILE.INCARNATION
datafiles will be created this way no matter what you do
The difference is that if you don't use OMF, an alias will be created referencing the file, with the path you gave.
for example:
OMF:
create tablespace test datafile size 10M;
a datafile is created: +DATA_VENKAT_128/VENKAT/DATAFILE/test.280.789581212 (I wrote some random numbers here)
non-OMF:
create tablespace test datafile '+DATA_VENKAT_128/dummy/test01.dbf' size 10M;
what actually happens:
a datafile is created: +DATA_VENKAT_128/VENKAT/DATAFILE/test.280.789581212 (I wrote some random numbers here)
and an ASM alias is created: +DATA_VENKAT_128/dummy/test01.dbf
and this alias is used by the database
While OMF files have their specified path format, and their path (db_unique_name) and even name (the numbers at the end) will change when duplicated, aliases don't necessarily do this.
However, this is just extra work and administration; OMF is your friend. -
Removing file system from meta devices in solaris 10
hi,
I have created a file system on a metadevice in Solaris 10 using the command below:
newfs /dev/md/rdsk/d110
Now I want to remove the file system to free the metadevice. What command removes the file system?
Regards
Zeeshan

Thanks for your response. Actually, I performed the steps below to release the space from mount point /u05, which I want to make a raw device so that I can use it for ASM. So my question is: how can I unformat the file system so that it becomes a raw device?
umount /u05
metaclear d110
Now I have the two devices below, 500 GB each, which I want to use as raw devices so that I can allocate them to ASM (Automatic Storage Management). So how can we make them raw devices?
/dev/dsk/emcpower17a
/dev/dsk/emcpower17a -
Dfc: Display file system space usage using graph and colors
Hi all,
I wrote a little tool, somewhat similar to df(1) which I named dfc.
To present it, nothing better than a screenshot (because of colors):
And there are a few options available (as of version 3.0.0):
Usage: dfc [OPTIONS(S)] [-c WHEN] [-e FORMAT] [-p FSNAME] [-q SORTBY] [-t FSTYPE]
[-u UNIT]
Available options:
-a print all mounted filesystem
-b do not show the graph bar
-c choose color mode. Read the manpage
for details
-d show used size
-e export to specified format. Read the manpage
for details
-f disable auto-adjust mode (force display)
-h print this message
-i info about inodes
-l only show information about locally mounted
file systems
-m use metric (SI unit)
-n do not print header
-o show mount flags
-p filter by file system name. Read the manpage
for details
-q sort the output. Read the manpage
for details
-s sum the total usage
-t filter by file system type. Read the manpage
for details
-T show filesystem type
-u choose the unit in which
to show the values. Read the manpage
for details
-v print program version
-w use a wider bar
-W wide filename (un truncate)
If you find it interesting, you may install it from the AUR: http://aur.archlinux.org/packages.php?ID=57770
(it is also available on the archlinuxfr repository for those who have it enabled).
For further explanations, there is a manpage or the wiki on the official website.
Here is the official website: http://projects.gw-computing.net/projects/dfc
If you encounter a bug (or several!), it would be nice to inform me. If you wish a new feature to be implemented, you can always ask me by sending me an email (you can find my email address in the manpage or on the official website).
Cheers,
Rolinh
Last edited by Rolinh (2012-05-31 00:36:48)

bencahill wrote: There were the decently major changes (e.g. -t changing from 'don't show type' to 'filter by type'), but I suppose this is to be expected from such young software.
I know I changed the options a lot with the 2.1.0 release. I thought it would be better to have -t for filtering and -T for printing the file system type, so someone using the original df would not be surprised.
I'm sorry for the inconvenience. There should not be any changes like this in the future, though; I thought it was needed (especially because of the unit options).
bencahill wrote:
Anyway, I now cannot find any way of having colored output showing only some mounts (that aren't all the same type), without modifying the code.
Two suggestions:
1. Introduce a --color option like ls and grep (--color=WHEN, where WHEN is always,never,auto)
Ok, I'll implement this one for the 2.2.0 release. It'll be more like "-c always", "-c never" and "-c auto" (default) because I do not use long options, but I think this would be OK, right?
bencahill wrote:2. Change -t to be able to filter multiple types (-t ext4,ext3,etc), and support negative matching (! -t tmpfs,devtmpfs,etc)
This was already planned for 2.2.0 release
bencahill wrote:Both of these would be awesome, if you have time. I've simply reverted for now.
This is what I would have suggested.
bencahill wrote:By the way, awesome software.
Thanks! I'm glad you like it!
bencahill wrote:P.S. I'd already written this up before I noticed the part in your post about sending feature requests to your email. I decided to post it anyway, as I figured others could benefit from your answer as well. Please forgive me if this is not acceptable.
This is perfectly fine. Moreover, I seem to have some trouble with my e-mail address... So it's actually better that you posted your requests here! -
Zfs destroy DOES NOT CHECK NFS mount file-systems
I asked this question on Twitter once and the answer was a good one, but I did some checking today and was surprised!
# zfs destroy mypool/home/andrew
The above command will destroy this file-system, no questions asked; but if the file-system is mounted you will get back "Device busy", and if you have snapshots then they will be protected as well:
server# zfs destroy mypool/home/andrew
cannot unmount 'tank/home/andrew': Device busy
server# zfs destroy dpool/staff/margaret
cannot destroy 'dpool/staff/margaret': filesystem has children
use '-r' to destroy the following datasets:
dpool/staff/margaret@Wed18
dpool/staff/margaret@Wed22
BUT?
server# zfs destroy dpool/staff/margaret@Wed18
server# zfs destroy dpool/staff/margaret@Wed22
NFSclient# cd /home/margaret
NFSclient# ls -l
drwx------+ 2 margaret staff 2 Aug 29 17:06 Mail
lrwxrwxrwx 1 margaret staff 4 Aug 29 17:06 mail -> Mail
drwx--x--x+ 2 margaret staff 2 Aug 29 17:06 public_www
server# zfs destroy dpool/staff/margaret
server#
GONE!!!
I will file a bug report to see what Oracle says!
Comments?
I think there should be a hold/protect of file-systems
# zfs hold dpool/staff/margaret
Andrew

The CR is already filed:
6947584 zfs destroy should be recoverable or prevented
The zfs.1m man page (which covers the mounted case) and the ZFS admin guide are pretty clear
about the current zfs destroy behavior.
http://docs.oracle.com/cd/E23824_01/html/821-1448/gamnq.html#gammq
Caution - No confirmation prompt appears with the destroy subcommand. Use it with extreme caution.
zfs destroy [-rRf] filesystem|volume
Destroys the given dataset. By default, the command
unshares any file systems that are currently shared,
unmounts any file systems that are currently mounted,
and refuses to destroy a dataset that has active dependents (children or clones).
I'm sorry that you were surprised.
Accidents happen too, like destroying the wrong file system, so always have good backups.
Thanks, Cindy -
Hide a type of file System wide not just the extension*
Is it possible, and if so how, to hide a type of file system-wide, not just the extension?
I am trying to hide, in Finder, the associated file for photo processing software; it is a file specific to the program and holds the raw-data text processed info.
Otherwise it would mean I would need to change each file individually.
Is there a way / an AppleScript?
Knowledge or experience would be appreciated.
Thanks

I followed these instructions and they work, but I also get errors. Here are two files in the directory, and the contents of ~/.profile; I called the function killextension:
I run the command to hide the PDF:
Note the two errors. Thank you very much, BTW. No more Dropbox attributes files! -
Run operating system command for sender file adapter (NFS)
Hi All,
I am doing a file-to-RFC scenario, using 'Run operating system command' in the sender file adapter to change the file name while archiving (after processing is completed).
I specify the OS command like this:
sample_server\scripts\Test\Rename.bat"
The Rename.bat file calls a Perl script.
When I run the interface, I can see the statement below in the adapter log ->
"Execute OS command "
sample_server\scripts\Test\Rename.bat"
but the script was not run and file name was not changed.
Please advise what could be the problem.
Does this mean the script executed successfully?
Do I need to install Perl on the XI server, even though the Perl script (.bat file) executes on sample_server?
Thanks in advance..
Regards,
Rajesh

Hi,
Just check the following URL and give it a try again:
Executing Unix shell script using Operating System Command in XI
Hope this info Helps..
Regards,
Aditya