Physical storage Checks

Can anybody forward me the PL/SQL commands to check physical storage?
I want to check how much physical storage we have free.
Secondly, I want to check other storage, like tablespaces, and any other storage we should keep checking for our DB maintenance.
Need suggestions.

This is a SQL*Plus script that can be modified for use in PL/SQL; it returns the space used and free per tablespace.
-- tablespace_size.sql
-- This SQL Plus script lists tablespace space used
-- Parameter:
-- &1: Display order by:
--     1: tablespace name
--     2: total MB
--     3: used MB
--     4: free MB
--     5: % used
--     6: % free
SET lines 300
SET feedback off
SET verify off
SET pages 40
COLUMN pct_used format 999.9 heading "%|Used"
COLUMN pct_free format 999.9 heading "%|Free"
COLUMN name format a16 heading "Tablespace Name"
COLUMN mbytes format 999,999,999 heading "Total|MB"
COLUMN used format 999,999,999 heading "Used|MB"
COLUMN free format 999,999,999 heading "Free|MB"
COLUMN largest format 999,999,999 heading "Largest"
BREAK on report
COMPUTE sum of mbytes on report
COMPUTE sum of free on report
COMPUTE sum of used on report
PROMPT Order by:
PROMPT     1: tablespace name
PROMPT     2: total MB
PROMPT     3: used MB
PROMPT     4: free MB
PROMPT     5: % used
PROMPT     6: % free
SELECT NVL (b.tablespace_name, NVL (a.tablespace_name, 'UNKNOWN')) name,
       mbytes_alloc mbytes,
       mbytes_alloc - NVL (mbytes_free, 0) used,
       NVL (mbytes_free, 0) free,
       ((mbytes_alloc - NVL (mbytes_free, 0)) / mbytes_alloc) * 100 pct_used,
       100 - (((mbytes_alloc - NVL (mbytes_free, 0)) / mbytes_alloc) * 100) pct_free
  FROM (SELECT SUM (bytes) / 1024 / 1024 mbytes_free, tablespace_name
          FROM SYS.dba_free_space
         GROUP BY tablespace_name) a,
       (SELECT SUM (bytes) / 1024 / 1024 mbytes_alloc, tablespace_name
          FROM SYS.dba_data_files
         GROUP BY tablespace_name) b
 WHERE a.tablespace_name(+) = b.tablespace_name
UNION ALL
SELECT f.tablespace_name,
       SUM (ROUND ((f.bytes_free + f.bytes_used) / 1024 / 1024, 2)) mbytes,
       SUM (ROUND (NVL (p.bytes_used, 0) / 1024 / 1024, 2)) used,
       SUM (ROUND (((f.bytes_free + f.bytes_used) - NVL (p.bytes_used, 0)) / 1024 / 1024, 2)) free,
       (SUM (ROUND (NVL (p.bytes_used, 0) / 1024 / 1024, 2)) * 100)
          / SUM (ROUND ((f.bytes_free + f.bytes_used) / 1024 / 1024, 2)) pct_used,
       100 - (SUM (ROUND (NVL (p.bytes_used, 0) / 1024 / 1024, 2)) * 100)
          / SUM (ROUND ((f.bytes_free + f.bytes_used) / 1024 / 1024, 2)) pct_free
  FROM SYS.v_$temp_space_header f,
       dba_temp_files d,
       SYS.v_$temp_extent_pool p
 WHERE f.tablespace_name(+) = d.tablespace_name
   AND f.file_id(+) = d.file_id
   AND p.file_id(+) = d.file_id
 GROUP BY f.tablespace_name
 ORDER BY &1
/
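
For the "other storage we should keep checking" part of the question: besides the tablespaces themselves, two areas worth watching routinely are the flash/fast recovery area (archived logs and backups) and how much headroom the datafiles still have to autoextend. A minimal sketch, assuming 10g or later and that a recovery area is actually configured (otherwise V$RECOVERY_FILE_DEST returns no rows):
-- Recovery area: configured limit vs. space used
SELECT name,
       space_limit / 1024 / 1024 limit_mb,
       space_used / 1024 / 1024 used_mb,
       space_reclaimable / 1024 / 1024 reclaimable_mb
  FROM v$recovery_file_dest;
-- Breakdown of recovery area usage by file type
SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
  FROM v$flash_recovery_area_usage;
-- How much further the datafiles can still grow through autoextend
SELECT tablespace_name,
       SUM (bytes) / 1024 / 1024 current_mb,
       SUM (DECODE (autoextensible, 'YES', maxbytes, bytes)) / 1024 / 1024 max_mb
  FROM dba_data_files
 GROUP BY tablespace_name;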

Similar Messages

  • Physical storage Checks & adding disk on server

    Can anybody guide me on how I can check the overall tablespace usage and how much physical space (disk space) I am using on the server?
    I am working on a Unix OS.

    Run this query to find space usage per tablespace:
    select c.tablespace_name as "TABLESPACE",
           c.extent_management as "EXTENT MANAGEMENT",
           c.contents,
           c.logging,
           c.allocation_type,
           round((b.total_space_available/1048576),2) as "SIZE MB",
           round((a.free_space/1048576),2) as "FREE MB",
           round((b.total_space_available/1048576)-(a.free_space/1048576),2) as "USED MB",
           round(100 - (((a.free_space/1048576)/(b.total_space_available/1048576))*100),1)||' %' as "% USED"
    from (select sum(bytes) as free_space, tablespace_name
          from dba_free_space              -- free space in permanent tablespaces
          group by tablespace_name
          union all
          select sum(bytes_free) as free_space, tablespace_name
          from v$temp_space_header         -- free space in temporary tablespaces
          group by tablespace_name) a,     -- free space per tablespace
         (select sum(user_bytes) as total_space_available, tablespace_name
          from dba_data_files              -- total space in permanent tablespaces
          group by tablespace_name
          union all
          select sum(user_bytes) as total_space_available, tablespace_name
          from dba_temp_files              -- total space in temporary tablespaces
          group by tablespace_name) b,     -- total size per tablespace
         dba_tablespaces c
    where a.tablespace_name (+) = b.tablespace_name  -- outer join so full tablespaces (no rows in dba_free_space) still appear
    and b.tablespace_name = c.tablespace_name
    order by round(100 - (((a.free_space/1048576)/(b.total_space_available/1048576))*100),1) desc;
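    To relate this to the actual physical (disk) space used on the server, here is a rough sketch of the database's total on-disk footprint, which you can compare with what df reports on Unix (the V$CONTROLFILE size columns assume 10g or later; archived logs and backups are not included):
    select 'Datafiles' component, round(sum(bytes)/1048576) mb from dba_data_files
    union all
    select 'Tempfiles', round(sum(bytes)/1048576) from dba_temp_files
    union all
    select 'Online redo logs', round(sum(bytes*members)/1048576) from v$log
    union all
    select 'Controlfiles', round(sum(block_size*file_size_blks)/1048576) from v$controlfile;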
    HTH

  • SDDM 4.1 - Physical Storage Properties Not Appearing in DDL

    I have only had this problem while using 4.1, not 4.0. In Tools > Preferences > Data Modeler > DDL I have the "Include Storage in DDL" box checked, and I have a Storage Template assigned to my table in my physical model. However, when I preview my table's DDL, I do not see any of the physical storage properties included. This has not always been the case in 4.1. The first time I noticed that the storage properties did not appear, I closed SDDM and reopened it. That fixed my problem temporarily, but now I cannot get the storage properties to appear at all. Is this a bug in SDDM 4.1? Or am I doing something wrong? I did not have this problem in 4.0.

    Hi,
    A few things to check:
    1. The Physical Model that will be used in the DDL Preview is the one shown on the RDBMS Site property on the General tab of the Model Properties dialog for the Relational Model.
    (Right click on the entry for the Relational Model in the Browser and select Properties.)
    This gets updated whenever you do a full DDL Generation to refer to the Physical Model used in that DDL Generation. (Its initial default value for a new Relational Model is Oracle Database 11g.)
    Make sure that this refers to the relevant Physical Model.  Update it if necessary.
    2. The relevant Physical Model must be open when you do the DDL Preview.
    3. As well as the "Include Storage in DDL" option there are additional options for each individual storage clause.
       These appear on the Data Modeler > DDL > DDL/Storage page of the Preferences.  The relevant options here need to be checked as well.
    If there still seems to be a problem, please let us know, together with any extra details you can provide.
    Thanks,
    David

  • Shared storage check failed on nodes

    hi friends,
    I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below
    ./runcluvfy stage -post hwos -n rac1,rac2, I am facing the error below.
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
    Please help me, anyone; it's urgent.
    Thanks,
    poorna.

    Hello,
    It seems that your storage is not accessible from both the nodes. If you want you can follow these steps to configure 10g RAC on VMware.
    Steps to configure a two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
    b) 2 switches
    c) 6 straight cables
    Remark-2: S/W requirement for RAC
    a) 10g clusterware
    b) 10g database
    Both must be the same version, e.g. (10.2.0.1.0)
    Remark-3: RPMs requirement for RAC
    a) all 10g RPMs (better to use RHEL-4 and choose the 'Everything' option to install all the RPMs)
    b) 4 additional RPMs are required for the installation
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
    i. node1.oracle.com
    eth0 (192.9.201.183) - for public network
    eth1 (10.0.0.1) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    ii. node2.oracle.com
    eth0 (192.9.201.187) - for public network
    eth1 (10.0.0.2) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    iii. openfiler.oracle.com
    eth0 (192.9.201.182) - for public network
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
    2. Network configuration
    #vim /etc/hosts
    192.9.201.183 node1.oracle.com node1
    192.9.201.187 node2.oracle.com node2
    192.9.201.182 openfiler.oracle.com openfiler
    10.0.0.1 node1-priv.oracle.com node1-priv
    10.0.0.2 node2-priv.oracle.com node2-priv
    192.9.201.184 node1-vip.oracle.com node1-vip
    192.9.201.188 node2-vip.oracle.com node2-vip
    3. Prepare both nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create required directories and set the ownership and permission.
    # mkdir -p /u01/crs1020
    # mkdir -p /u01/app/oracle/product/10.2.0/asm
    # mkdir -p /u01/app/oracle/product/10.2.0/db_1
    # chown -R oracle:oinstall /u01/
    # chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
    #LANG="en_US"; export LANG
    4. Storage configuration
    PART-A Open-filer Set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
    # chkconfig --level 345 iscsi-target on
    PART-B Configuring Storage on openfiler
    a) From any client machine open the browser and access openfiler console (446 ports).
    https://192.9.201.182:446/
    b) Open system tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
    d) From "block Device managemrnt" click on "(/dev/sda)" option under 'edit disk' option.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
    h) Then go to the "Volume Section" on the right hand side tab and then click on "Add Volumes" and then specify the Volume name (ex- racvol1) and use all space and specify the "Filesytem/Volume type" as ISCSI and then click on CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
    j) then goto the 'LUN Mapping" and click on "MAP".
    k) then goto the "Network ACL" and allow both node from there and click on UPDATE.
    Note:- To create multiple volumes with openfiler we need to use multipathing, which is quite complex; that's why we are going with a single volume here. Edit the properties of each volume and change access to 'allow'.
    f) Install the iscsi-initiator RPM on both nodes to access the iSCSI disk
    #rpm -ivh iscsi-initiator-utils-----------
    g) Make an entry about openfiler in the iscsi.conf file on both nodes.
    #vim /etc/iscsi.conf (in RHEL-4)
    In this file you will find a line "#DiscoveryAddress=192.168.1.2"; remove the comment and specify your storage IP address here.
    OR
    #vim /etc/iscsi/iscsi.conf (in RHEL-5)
    In this file you will find a line "#ins.address = 192.168.1.2"; remove the comment and specify your storage IP address here.
    g) #service iscsi restart (on both nodes)
    h) From both Nodes fire this command to access volume of openfiler-
    # iscsiadm -m discovery -t sendtargets -p 192.9.201.182
    i) #service iscsi restart (on both nodes)
    j) #chkconfig --level 345 iscsi on (on both nodes)
    k) Make 3 primary partitions and 1 extended partition, and within the extended partition make 11 logical partitions
    A. Prepare partitions
    1. #fdisk /dev/sdb
    :e (extended)
    Part No. 1
    First Cylinder:
    Last Cylinder:
    :p
    :n
    :l
    First Cylinder:
    Last Cylinder: +1024M
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Login as root user on node2 and run partprobe
    B. On node1 login as root user and create following raw devices
    # raw /dev/raw/raw5 /dev/sdb5
    # raw /dev/raw/raw6 /dev/sdb6
    # raw /dev/raw/raw12 /dev/sdb12
    Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above
    -Repeat the same thing on node2
    C. On node1 as root user
    # vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service (# service rawdevices restart)
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
    F. To make these partitions accessible to oracle user fire these commands from both Nodes.
    # chown -R oracle:oinstall /dev/raw/raw*
    # chmod -R 755 /dev/raw/raw*
    F. To make these partitions accessible after a restart, add these entries on both nodes
    # vi /etc/rc.local
    chown -R oracle:oinstall /dev/raw/raw*
    chmod -R 755 /dev/raw/raw*
    5. SSH configuration (user equivalence)
    On node1:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node2:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node1:- $cd .ssh
    $cat *.pub>>node1
    On node2:- $cd .ssh
    $cat *.pub>>node2
    On node1:- $scp node1 node2:/home/oracle/.ssh
    On node2:- $scp node2 node1:/home/oracle/.ssh
    On node1:- $cat node*>>authorized_keys
    On node2:- $cat node*>>authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
    The first time you'll have to give the password; after that it will not ask for the password again.
    6. To run the cluster verifier
    On node1 :-$cd /…/stage…/cluster…/cluvfy
    $./runcluvfy stage -pre crsinst -n node1,node2
    The first time it will ask for four new RPMs, but remember to install these RPMs by double-clicking because of the dependencies. So it is better to install them in this order (rpm-3, rpm-4, rpm-1, rpm-2)
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    Run cluvfy again and check that it gives a clean result, then start the clusterware installation.

  • Issue with the storage check if files are uploaded...

    There is an issue with (photo) uploads to SkyDrive filling the temporary items. When I started to upload photos to SkyDrive, I had about 2.4 GB of free space on my Lumia 820, and after a while the whole memory was full. I successfully uploaded about 600 MB of photos, but at the same time the phone filled the temporary items to 2.4 GB and thus came to a full stop with all memory gone. And I haven't had any success freeing up the temporary items.
    Basically, whatever I upload to SkyDrive goes to the temp files, and that cannot be cleared using the Storage Check option.
    Can you please help me on this?

    cjlim wrote:
    No worries with the 925. It comes with at least 16gb internal storage. Have a good holiday.
    16 GB without a microSD card slot isn't enough for many users, especially those who have a large music collection. (Note: cloud storage is NOT equivalent to local storage.) In any case, 16 GB will only postpone, not get rid of, the problem of the 'Other' folder.

  • Lumia 620 storage check not working

    So I ran this Storage Check yesterday. It just keeps loading and loading. I uninstalled all apps, and it is still not working. This is a big issue for me. Please help.

    Storage Check was actually just updated. The official version now is the same as the beta 3 version minus the maps move option (as explained here).

  • Shared Storage Check

    Hi all,
    We are planning to add a node to our existing RAC deployment (database: 10gR2, Sun Solaris 5.9 OS). Currently the shared storage is an IBM SAN.
    When I run the shared storage check using cluvfy, it fails to detect any shared storage. Given that I can ignore this error message (since cluvfy doesn't work with SAN, I believe), how can I check whether the storage is shared or not?
    Note
    When I look at the partition table from both servers, it looks the same (for the SAN drive, of course), but the names/labels of the storage are different (for example, the existing node shows c6t0d0, but the new node, which is to be added, shows something different. Is that OK?).
    regards,
    Muhammad Riaz

    Never mind. I found a solution at http://www.idevelopment.info.
    (1) Create the following directory structure on the second node (same as the first node), with the same permissions as on the existing node:
    /asmdisks
    - crs
    - disk1
    - disk2
    - vote
    (2) Use ls -lL /dev/rdsk/<Disk> to find out the major and minor IDs of the shared disks, and attach those IDs to the relevant entries above using the mknod command:
    # ls -lL /dev/rdsk/c4t0d0*
    crw-r-----   1 root     sys       32,256 Aug  1 11:16 /dev/rdsk/c4t0d0s0
    crw-r-----   1 root     sys       32,257 Aug  1 11:16 /dev/rdsk/c4t0d0s1
    crw-r-----   1 root     sys       32,258 Aug  1 11:16 /dev/rdsk/c4t0d0s2
    crw-r-----   1 root     sys       32,259 Aug  1 11:16 /dev/rdsk/c4t0d0s3
    crw-r-----   1 root     sys       32,260 Aug  1 11:16 /dev/rdsk/c4t0d0s4
    crw-r-----   1 root     sys       32,261 Aug  1 11:16 /dev/rdsk/c4t0d0s5
    crw-r-----   1 root     sys       32,262 Aug  1 11:16 /dev/rdsk/c4t0d0s6
    crw-r-----   1 root     sys       32,263 Aug  1 11:16 /dev/rdsk/c4t0d0s7
    mknod /asmdisks/crs      c 32 257
    mknod /asmdisks/disk1      c 32 260
    mknod /asmdisks/disk2      c 32 261
    mknod /asmdisks/vote      c 32 259
    # ls -lL /asmdisks
    total 0
    crw-r--r--   1 root     oinstall  32,257 Aug  3 09:07 crs
    crw-r--r--   1 oracle   dba       32,260 Aug  3 09:08 disk1
    crw-r--r--   1 oracle   dba       32,261 Aug  3 09:08 disk2
    crw-r--r--   1 oracle   oinstall  32,259 Aug  3 09:08 vote
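    If the existing nodes already run ASM over these devices, one extra cross-check (a sketch only, not a replacement for cluvfy) is to compare what the current ASM instance reports against the device nodes just created on the new node; the paths and sizes should line up:
    -- Run on an existing node against the ASM instance (e.g. ORACLE_SID=+ASM1, connect / as sysdba)
    SELECT path, header_status, state, total_mb, free_mb
    FROM v$asm_disk
    ORDER BY path;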

  • Oracle 9i - Physical Storage

    Hello,
    I am new to Oracle and I need to find a way to move the physical storage of an Oracle installation to a NAS device.
    Currently I have an Oracle 9i instance running on a Linux box. Unfortunately this box is running out of space, so I need to move only the physical storage to the NAS device (Dell PowerVault NAS). I would still manage the Oracle instance only through my Linux box; this way I can store more data in my Oracle instance.
    Any help on this is highly appreciated!
    Vanniarajan

    Leave the Oracle software on Linux. Your instance will continue to be hosted on Linux. All you would be doing is moving the dbf files from the Linux box to the NAS. The NAS contains a file system created by the vendor's software; you do not install any OS on it. Make sure you have a valid backup before moving, just in case.
    For example, we have a NetApp SAN box here. All the dbf files of my 8i and 9i databases are on it, and I have a Windows 2000 Server hosting the 8i instances and a Windows 2003 Server hosting the 9i databases.
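    Here is a minimal sketch of relocating one datafile once the NAS is mounted on the Linux box; the paths and the USERS tablespace are example names only, SYSTEM/undo/temp files need the shutdown-and-mount variant of the rename instead, and (as noted above) take a verified backup first:
    -- 1. List the files that have to move
    SELECT tablespace_name, file_name, bytes / 1048576 mb FROM dba_data_files;
    -- 2. Per tablespace: take it offline, copy the file to the NAS at OS level, rename it, bring it back online
    ALTER TABLESPACE users OFFLINE NORMAL;
    -- (copy /u02/oradata/orcl/users01.dbf to /nas/oradata/orcl/users01.dbf at the OS prompt)
    ALTER DATABASE RENAME FILE '/u02/oradata/orcl/users01.dbf'
                            TO '/nas/oradata/orcl/users01.dbf';
    ALTER TABLESPACE users ONLINE;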

  • Exchange 2010 "event id 403/401 The physical consistency check" Backups does not work...Critical!!

    Hi,
    At this moment our backups do not work. We have two errors in the Event Viewer:
    Exchange 2010 SP3, version 14.03.0158.001
    We have only one Exchange 2010 server, with two databases for users...
    Event id 401;
    Instance 1: The physical consistency check has completed, but one or more errors were detected. The consistency check has terminated with error code of -106 (0xffffff96).
    Event id 403;
    Instance 1: The physical consistency check successfully validated 629126 out of 1110272 pages of database '\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy33\exchangedb\global\global.edb'. Because some database pages were either not validated or failed validation,
    the consistency check has been considered unsuccessful.

    Hi Javier,
    From your error description, I recommend you do the following steps for troubleshooting:
    1. Please run CHKDSK and restart server.
    2. Run a Database level Mailbox repair request.
    What's more, here are some useful threads for your reference.
    Create a Mailbox Repair Request
    http://technet.microsoft.com/en-us/library/ff625221(v=exchg.141).aspx
    Database Backup Failing with Event ID 2007, 9782, 401, 403, 254
    http://social.technet.microsoft.com/Forums/en-US/4c8eccbf-435a-43ef-b3f6-0de5096413ee/database-backup-failing-with-event-id-2007-9782-401-403-254?forum=exchangesvravailabilityandisasterrecoverylegacy
    Hope it helps.
    If you need further assistance, please feel free to let me know.
    Best regards,
    Amy

  • New Storage Check setting not working - Lumia 920

    I received the new update yesterday but the new "Storage Check" doesn't work. I see it in Settings, but when I click on it, it looks like it is going to do something (page flip animation) and then it goes right back to Settings. Does anyone have this issue or know how to fix it? I have done a soft reset and turned the phone off a few times. I do not want to do a hard reset because I recently did that and I have the phone the way I like it.

    Guys, I'm from India.
    Just updated my Lumia 920 to new firmware version 1308.0001
    As reported by others, I have the same problem. My 'Storage Check' option is not working at all!
    It keeps on loading.
    What to do?? Please help!
    Attaching a screen shot.
    Attachments:
    wp_ss_20130409_0002.png (110 KB)
    wp_ss_20130409_0003.png (100 KB)

  • Physical storage types

    I have a question....
    Are the interim storage areas physical storage areas, or just dummy areas we use to update the stock when GI and GR happen and to clear differences? I know that putaway strategies and picking strategies are not defined for these... So I am thinking these are just dummy storage areas we define to handle negative stocks etc. Am I wrong? Please correct me.

    Hello
    The interim storage types are physical or logical depending upon the client's business process.
    Normally big warehouses have these interim storage types: the goods will be unloaded and then sent to inspection, or repacking, skidding or labelling will be done, and then the goods are transferred to the storage bins. Similarly, goods will be picked from different storage types to an interim storage location and then delivered to the customer.
    Warm regards
    Ramarkishna

  • Physical storage for infoobject

    Hi
    Could you tell me what physical storage is for an infoobject? Especially when one infoobject (named A) belongs to another (named B) as an attribute: in that case, where is the data of A physically stored? Is it stored in a table of infoobject B, which owns the attribute? Or is the data stored in A's own SID table, so that when we view the master data of B the system joins all the SID tables belonging to the respective attributes to show the master data?

    Hi Guixin,
    Info-object A can also have its own attributes. Whether it does depends basically on the data modelling and the data dependencies. An Employee can have name, address, phone number and department as its attributes. Similarly, department will have attributes like location, etc...
    But each info-object will have its own text.
    SIDs are created by the BW system itself. If you are looking for the data in the backend, then look for the "M" table. It would be /BIC/M<name of the info-object> or /BI0/M<name of the info-object>.
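    As a rough illustration only: for a hypothetical custom info-object named ZCUSTOMER, the generated master-data/attribute view could be inspected roughly like this (the name is made up, the double quotes are needed because of the leading slash, and in practice you would browse these tables with transaction SE16 rather than direct SQL):
    -- Hypothetical example; assumes an Oracle-based BW system and access to the SAP schema
    SELECT * FROM "/BIC/MZCUSTOMER" WHERE ROWNUM <= 10;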
    Bye
    Dinesh

  • Runcluvfy.bat comp ssa - Shared storage check failed

    I've run cluvfy on Windows 2003 64-bit on a SAN with 3 nodes and found that it is unsuccessful for the shared storage check (whereas it is successful on Windows 2003 32-bit).
    C:\Documents and Settings\Administrator>C:\_Ly1\102010_win64_x64_clusterware\clusterware\cluvfy\runcluvfy.bat comp ssa -n rac1,rac2,rac3
    The system cannot find the file specified.
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    Shared storage check failed on nodes "rac2,rac1,rac3".
    Verification of shared storage accessibility was unsuccessful on all the nodes.
    C:\Documents and Settings\Administrator>
    I'm not sure whether this will cause the Clusterware installation to fail or not.
    Here, I captured the failure screen :
    http://alkaspace.com/is.php?i=30223&img=clusterware-fai.jpg
    Please help me out. Thank you!!

    I just ran into this myself while building an enterprise system on Win Server 2003. The answers here did not sit well with me and I needed to be sure of the shared storage prior to proceeding further. Researching Metalink did not uncover any relevant information either. I then opened an SR with Oracle and I did get back a satisfactory response which allowed me to verify my shared storage. The entire text of their solution can be found at http://www.webofwood.com/rac/oracle-response-to-shared-storage-check-failed-on-nodes/. Basically, it is a method of using another utility to identify the storage device names used by Windows and then writing and reading to them from each node to verify each node 'sees' what is written by the other node(s). If this check is successful, you can then proceed.

  • About BDB XML physical storage of XML documents

    Recently I have been looking into BDB XML,
    and I want to know how it stores XML documents, and in what format. Is it like Natix storage, in page files? And if no index can be utilized, will the system use tree traversal to answer the query?
    Also, is there any document or technical manual that introduces how the BDB XML internal storage format works, how queries are processed, how indexes are built, etc.? I have read some of the documentation, but it mainly explains how to use the system (like an API introduction).
    Thanks very much !!

    Hi Henry,
    The physical nodes store a large amount of information in a record, including its node ID, its parent's node ID, its level in the tree and the node ID of its last descendant.
    Ancestor-descendant relationships can be calculated using the node ID and last descendant ID as upper and lower bounds. Parent-child relationships additionally use the node level information. Sibling relationships need to use the parent's ID to check that they have the same parent.
    Navigation, on the other hand, uses other node IDs stored in the physical node, or implicit information. For instance, if a node has children, its first child is always the next node record stored. The last child ID is stored in the physical node, since this cannot be similarly calculated, as are the next and previous sibling node IDs.
    If you are interested look in dbxml/src/dbxml/nodeStore/NsFormat.(hpp|cpp), which contains the marshaling code for the node storage format.
    John

  • Can I assign several physical storage locations for each virtual machine when using the replication-feature from Hyper-V 2012 R2?

    Hi everyone,
    I have 2x physical servers running Hyper-V 2012 R2. Each hosts several virtual machines. The VHDs of the VMs are stored on several dedicated physical disks to get a performance boost. For example, if VM A has two VHDs attached, I made sure that the VHDs are on different physical disks so they do not slow each other down in case of intensive disk access.
    So far so good. I was looking forward to the replication feature. The idea is to have the two physical servers replicate their primary running VMs to the other physical server and vice versa. I was hoping to have the chance to choose for each individual VM where the replicated VHD will be stored. But instead I can only see the one location/path which is configured in Hyper-V Manager when I activate the replication feature on the server.
    Is there by any chance a way how to select the storage location for each VHD/VM if using the replication-feature of Hyper-V 2012 R2?
    Thanks in advance.
    Cheers,
    Sebastian

    Secondly, you could replicate different VMs to different storage locations to perform some of the disk balancing you are trying to perform.  Lastly, you could copy the vhd file to a different location
    before starting the VM.
    .:|:.:|:. tim
    Hi Tim,
    thanks for the reply. Sorry, but I had some other tasks to take care of, so I wasn't paying enough attention to this thread.
    The part I quoted from your reply sounds exactly like the action I'd like to perform, but as you pointed out before this should not be possible.
    How can I perform the action (replicating each VM to a storage location) as you mentioned secondly? To sum it up again:
    2x physical machines carrying several HDDs
    8+ VMs spread to run on the 2x servers
    when setting up replication I can only set the storage-location from server A on B and vice versa B on A
    Thanks again for your reply.
    Cheers,
    Sebastian
