Managing Data Volumes in MaxDB

Hello,
Due to an upgrade from ERP 6.0 to ERP 6.0 EHP5, I had to add new data volumes to my MaxDB database.
I created 6 data volumes of 9.5 GB and 2 data volumes of 38 GB.
I am not too worried about the 9.5 GB data volumes, since they are close to 50% free.
However, the 38 GB data volumes are only around 13% full. This consumes a lot of space on my server.
Is there any way to reorganize the database? I need to free up some space on the server.
I am new to MaxDB.
Thanks,
Suhan Hegde

Hello Suhan Hegde,
1. Please see the documents "Deleting Data Volumes" and "Volumes (Permanent Storage)" in the MaxDB library at
http://maxdb.sap.com/doc/7_8/44/d77a6368113ee3e10000000a114a6b/content.htm
2.
As you need to free up some space on the server, first check that your permanent data area usage is less than 6 × 9.5 GB = 57 GB. Create a complete data backup, just to be safe. Then delete the 38 GB data volumes while the database is online; the data from the specified data volumes will be distributed to the remaining data volumes. More details are in the recommended document "Deleting Data Volumes", see 1., or you could use db_deletevolume, see the document at
http://maxdb.sap.com/doc/7_8/44/eefb7ab942108ee10000000a11466f/content.htm
or
you could do it using a backup/restore procedure: create a complete backup, initialize the instance for restore and change the data volume configuration, then continue with the restore.
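
The online deletion from point 2 could look like the following dbmcli session (a hedged sketch: MYDB, the dbm user/password, the backup medium name, and the volume ID 0007 are placeholders; check your actual volume IDs against the documents above first):

```shell
# Sketch only: MYDB, dbm,dbm, BackupMedium and volume ID 0007 are placeholders.
dbmcli -d MYDB -u dbm,dbm db_state                     # database should be ONLINE
dbmcli -d MYDB -u dbm,dbm info state                   # check permanent data area usage
dbmcli -d MYDB -u dbm,dbm backup_start BackupMedium    # complete data backup, to be safe
dbmcli -d MYDB -u dbm,dbm db_deletevolume DATA 0007    # pages move to the remaining volumes
```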
3. If you are SAP customer => Please review SAP notes:
          SAP Note 1173395
          SAP Note 1423732 (see point 13)
4. There are online training sessions at
     http://maxdb.sap.com/training/
Regards, Natalia Khlopina

Similar Messages

  • MaxDB (7.8.02.27) installation error: Invalid parameter Data Volumes

    Hi all,
    I get errors during the installation procedure of MaxDB 7.8.02.27 at the point where I define the database volume paths (step 4b).
    If I use the default values, the database gets created without errors.
    But if I make changes, e.g. to the size of the data volume, this error appears when I click Next:
    "Invalid value for data volume size: data size of 0KB does not make sense Specify useful sizes for your log volumes".
    If I create 2 data files with different names (DISKD0001, DISKD0002), I get an error message that I have used one filename twice.
    Now it's getting strange: if I use the Previous button to move one step back and then use the Next button again, it sometimes
    accepts the settings and I'm able to start the installation, and the database gets created.
    I'm remote on a VMware server 2008 R2 (EN) and I'm using the x64 package of MaxDB.
    Any ideas?
    Thanks
    Martin Schneider

    Hi Martin,
    A general system error occurs if the *.vmdk file is larger than the maximum size supported ... It has to be replaced with the nearest acceptable value associated with the various block sizes that you can use to create a datastore.
    You may need to adjust the block size when choosing the VMFS datastore.
    Hope this is useful.
    Regards,
    Deepak Kori

  • MaxDB data volumes usage

    Hi all,
    I have the following configuration for my MaxDB (which is a part of a DMS system).
    Version is MaxDB 7.6
    Data volumes are:
    1) 2 GB
    2) 2 GB
    3) 10GB
    4) 10GB
    5) 10GB
    6) 3GB
    7) 3GB
    8) 3GB
    It seems that for volumes 3, 4 & 5 (10 GB size) only 4.5 GB are used.
    My question is: why isn't the free 5.5 GB of the 10 GB volumes used?
    Is it a configuration issue or does MaxDB determine where to write automatically?
    I don't see any performance problems.
    Thanks,
    Omri

    > I have the following configuration for my MaxDB (which is a part of a DMS system).
    > Version is MaxDB 7.6
    >
    > Data volumes are:
    > 1) 2.GB
    > 2) 2.GB
    > 3) 10GB
    > 4) 10GB
    > 5) 10GB
    > 6) 3GB
    > 7) 3GB
    > 8) 3GB
    >
    > It's seems that for volumes 3, 4 & 5 (10GB size) only 4.5GB are used.
    They will be used - just put more data into your content server (?).
    > My question is: why the free 5.5GB of the 10GB volumes is not used?
    > Is it a configuration issue or does MaxDB determine where to write automatically?
    No, there is nothing wrong here.
    MaxDB chooses the data volumes for saving changed/new pages in an order based on their relative filling level.
    Nevertheless, as long as there is free space in a volume it is possible that it will get used as well.
    Maybe you're thinking of a feature that existed until SAP DB 7.4, where the database would even out the filling degree in times of low activity. This feature is not present anymore; instead, changed pages are relocated during savepoints.
    > I don't see any performance problems.
    Why should you? A content server usually does not put the same level of traffic on the database as an OLTP system would, so you can use differently sized data volumes without any performance drop.
    regards,
    Lars
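
    If you want to check the filling per volume yourself, something like the following dbmcli call can be used (a sketch only: MYDB and the credentials are placeholders, and the exact system view and column names may differ between MaxDB versions):

    ```shell
    # Sketch: filling level per data volume (view/column names may vary by version).
    dbmcli -d MYDB -u dbm,dbm sql_execute \
      "SELECT ID, USABLESIZE, USEDSIZE FROM SYSINFO.DATAVOLUMES"
    ```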

  • Moving maxdb data volumes

    Hello experts
    We have added SAN storage to our server and now we want to move the data volumes to the SAN. Is there a way to do this that is shorter than a migration? And are there any parameters that we will need to change to point to the new data location?
    Your ideas will be welcome.

    Hi Sanjay,
    I was able to find the blog just by using the SCN search...
    http://scn.sap.com/community/maxdb/blog/2008/09/11/questions-to-sap-support-how-to-move-maxdb-volumes
    Have fun,
    Lars

  • Performance: How to manage large reports with high data volume

    Hi everybody,
    we are currently running some tests on our BO server system to define limitations and opportunities. Among other things we constructed a large report with a high data volume (about 250,000 data records).
    When executing the query in SAP BEx Query Designer, it takes about 10 minutes to display it. In Crystal Reports we rebuilt the row and column structure of the query. The data retrieval in Crystal Reports Designer takes about 9 minutes, even faster than in the query.
    Unfortunately, in BO InfoView the report is not displayed. After 30 minutes of loading time we get a timeout error RCIRAS0244.
    com.crystaldecisions.sdk.occa.managedreports.ras.internal.ManagedRASException:
    Cannot open report document. ---
    The request timed out because there has been no reply from the server for 600.000 milliseconds.
    Also, a refresh of a report with saved data is not possible.
    Now we are asking ourselves some questions:
    1. Where can we set the timeout for InfoView to a value larger than 30 minutes?
    2. Why is InfoView so slow compared to Crystal Designer? Where is the bottleneck?
    3. What is the impact of SAP single sign-on compared to Enterprise logon on the performance?
    Thanks for any help and comments!
    Sebastian

    Hi Ingo,
    thank you for your reply.
    I will check the servers and maybe change the time limits.
    Unfortunately we have a quite slow server system, which probably causes this timeout. In CR Designer we have no problems; it's really quick. Should we expect CR Designer and InfoView to have almost the same performance?
    Another point: when we execute the query in SAP BEx Query Designer it takes about 10 minutes to open it, while in Crystal Designer it needs just about 5-6 minutes. We integrated exactly the same fields in the report that exist in the SAP BEx query.
    What may cause the difference?
    - Exceptions and conditions in the query?
    - Free characteristics in the query?
    - anything else?
    Best regards,
    Sebastian

  • Errors during install of Solution Manager 7.0 w/MaxDB (central)

    Hi
    I am trying to install Solution Manager 7.0 w/MaxDB on a linux box. I have reached phase 47/50 and am running into errors that say:
    ERROR 2011-07-14 16:29:54.609
    CJS-30151  Java process server0 of instance OSM/DVEBMGS00 [ABAP: ACTIVE, Java: (dispatcher: RUNNING, server0: UNKNOWN)] did not start after 1:30 minutes. Giving up.
    ERROR 2011-07-14 16:29:54.757
    FCO-00011  The step startJava with step key |NW_Onehost|ind|ind|ind|ind|0|0|SAP_Software_Features_Configuration|ind|ind|ind|ind|6|0|NW_Call_Offline_CTC|ind|ind|ind|ind|7|0|startJava was executed with status ERROR ( Last error reported by the step :Java process server0 of instance OSM/DVEBMGS00 [ABAP: ACTIVE, Java: (dispatcher: RUNNING, server0: UNKNOWN)] did not start after 1:30 minutes. Giving up.).
    Here is the java version I am using (JVM 4.1)
    atl-osm-01:osmadm 80> java -version
    java version "1.4.2_30"
    Java(TM) 2 Runtime Environment, Standard Edition (build 4.1.009)
    SAP Java Server VM (build 4.1.009 17.0-b16, May 27 2011 00:12:32 - 41_REL - optU - linux amd64 - 6 - bas2:154157 (mixed mode))
    in the dev_jcontrol log I see:
    -> lib path = LD_LIBRARY_PATH=/opt/java/sapjvm_4/jre/lib/amd64/server:/opt/java/sapjvm_4/jre/lib/amd64:/opt/java/sapjvm_4/jre/../lib/amd64:/usr/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/DVEBMGS00/exe:/tmp/sapinst_exe.26924.1310559612:/usr/sap/OSM/SYS/exe/run:/sapdb/programs/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib:/usr/sap/OSM/DVEBMGS00/j2ee/os_libs:/usr/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/SYS/exe/run:/sapdb/programs/lib
    -> exe path = PATH=/opt/java/java/bin:/usr/sap/OSM/DVEBMGS00/j2ee/os_libs:/sapdb/programs/bin:/opt/java/java/bin:.:/home/osmadm:/usr/sap/OSM/SYS/exe/run:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/dell/srvadmin/bin
    [Thr 47171544096752] JStartupICreateProcess: fork process (pid 5857)
    [Thr 47171544096752] JControlICheckProcessList: process server0 started (PID:5857)
    [Thr 47171544096752] Thu Jul 14 16:29:19 2011
    [Thr 47171544096752] JControlICheckProcessList: process server0 (pid:5857) died (RUN-FLAG)
    [Thr 47171544096752] JControlIResetProcess: reset process server0
    [Thr 47171544096752] JControlIResetProcess: [server0] not running -> increase error count (4)
    [Thr 47171544096752] JControlICheckProcessList: running flight recorder:
            /opt/java/java/bin/java -classpath ../j2ee/cluster/bootstrap/sap.comtcbloffline_launcherimpl.jar com.sap.engine.offline.OfflineToolStart com.sap.engine.flightrecorder.core.Collector ../j2ee/cluster/bootstrap -node ID3888650 1310675354 -bz /usr/sap/OSM/SYS/global
    In the dev_w12 trace log I see:
    M  *****************************************************************************
    M  *
    M  *  LOCATION    SAP-Gateway on host atl-osm-01 / sapgw00
    M  *  ERROR       program SLD_UC not registered
    M  *
    M  *  TIME        Thu Jul 14 16:25:52 2011
    M  *  RELEASE     700
    M  *  COMPONENT   SAP-Gateway
    M  *  VERSION     2
    M  *  RC          679
    M  *  MODULE      gwr3cpic.c
    M  *  LINE        1778
    M  *  DETAIL      TP SLD_UC not registered
    M  *  COUNTER     3
    M  *
    M  *****************************************************************************
    M
    A  RFC 1485  CONVID 80159271
    A   * CMRC=2 DATA=0 STATUS=0 SAPRC=679 ThSAPOCMINIT
    A  RFC> ABAP Programm: RSLDAGDS (Transaction: )
    A  RFC> User: DDIC (Client: 001)
    A  RFC> Destination: SLD_UC (handle: 1, , )
    A  *** ERROR => RFC ======> CPIC-CALL: 'ThSAPOCMINIT' : cmRc=2 thRc=679
    Transaction program not registered
    [abrfcio.c    8141]
    A  *** ERROR => RFC Error RFCIO_ERROR_SYSERROR in abrfcpic.c : 1501
    CPIC-CALL: 'ThSAPOCMINIT' : cmRc=2 thRc=679
    Transaction program not registered
    DEST =SLD_UC
    HOST =%%RFCSERVER%%
    PROG =SLD_UC
    GWHOST =atl-osm-01
    GWSERV =sapgw00
    [abrfcio.c    8141]
    A  TH VERBOSE LEVEL FULL
    A  ** RABAX: end RX_GET_MESSAGE
    B  table logging switched off for all clients
    S  handle memory type is RSTSPROMMM
    Any ideas on how I can fix this? Anything else that I should post that will help with analyzing the issue?
    Thanks in advance!

    Hi Sunny -
    Thank you for your quick response. I will have to ask a beginner's question now, but how do I restart the system manually? Is that just the startsap/stopsap scripts that I need to run?
    How would I know if the server0 is actually up and running after restarting?
    Also, Here is the dev_server0 log (I am posting snippets of the logs as they repeat):
      1
       2 -
       3 trc file: "/usr/sap/OSM/DVEBMGS00/work/dev_server0", trc level: 1, release: "700"
       4 -
       5 node name   : ID3888650
       6 pid         : 5639
       7 system name : OSM
       8 system nr.  : 00
       9 started at  : Thu Jul 14 16:28:59 2011
      10 arguments         :
      11           arg[00] : /usr/sap/OSM/DVEBMGS00/exe/jlaunch
      12           arg[01] : pf=/usr/sap/OSM/SYS/profile/OSM_DVEBMGS00_atl-osm-01
      13           arg[02] : -DSAPINFO=OSM_00_server
      14           arg[03] : pf=/usr/sap/OSM/SYS/profile/OSM_DVEBMGS00_atl-osm-01
      15           arg[04] : -DSAPSTART=1
      16           arg[05] : -DCONNECT_PORT=60140
      17           arg[06] : -DSAPSYSTEM=00
      18           arg[07] : -DSAPSYSTEMNAME=OSM
      19           arg[08] : -DSAPMYNAME=atl-osm-01_OSM_00
      20           arg[09] : -DSAPPROFILE=/usr/sap/OSM/SYS/profile/OSM_DVEBMGS00_atl-osm-01
      21           arg[10] : -DFRFC_FALLBACK=ON
      22           arg[11] : -DFRFC_FALLBACK_HOST=localhost
      23
      24
      25 [Thr 47057802424304] Thu Jul 14 16:28:59 2011
      26 [Thr 47057802424304] *** WARNING => INFO: Unknown property [instance.box.number=OSMDVEBMGS00atl-osm-01] [jstartxx_mt. 841]
      27 [Thr 47057802424304] *** WARNING => INFO: Unknown property [instance.en.host=atl-osm-01] [jstartxx_mt. 841]
      28 [Thr 47057802424304] *** WARNING => INFO: Unknown property [instance.en.port=3201] [jstartxx_mt. 841]
      29 [Thr 47057802424304] *** WARNING => INFO: Unknown property [instance.system.id=0] [jstartxx_mt. 841]
      30
      31 **********************************************************************
      32 JStartupReadInstanceProperties: read instance properties [/usr/sap/OSM/DVEBMGS00/j2ee/cluster/instance.properties]
      33 -> ms host    : atl-osm-01
      34 -> ms port    : 3901
      35 -> OS libs    : /usr/sap/OSM/DVEBMGS00/j2ee/os_libs
      36 -> Admin URL  :
      37 -> run mode   : NORMAL
      38 -> run action : NONE
      39 -> enabled    : yes
      40 **********************************************************************
      41
      42
      43 **********************************************************************
      44 Used property files
      45 -> files [00] : /usr/sap/OSM/DVEBMGS00/j2ee/cluster/instance.properties
      46 **********************************************************************
      47
      48 **********************************************************************
      49 Instance properties
      50 -> ms host    : atl-osm-01
      51 -> ms port    : 3901
      52 -> os libs    : /usr/sap/OSM/DVEBMGS00/j2ee/os_libs
      53 -> admin URL  :
      54 -> run mode   : NORMAL
      55 -> run action : NONE
      56 -> enabled    : yes
      57 **********************************************************************
      58
      59 **********************************************************************
      60 Bootstrap nodes
      61 -> [00] bootstrap            : /usr/sap/OSM/DVEBMGS00/j2ee/cluster/instance.properties
      62 -> [01] bootstrap_ID3888600  : /usr/sap/OSM/DVEBMGS00/j2ee/cluster/instance.properties
      63 -> [02] bootstrap_ID3888650  : /usr/sap/OSM/DVEBMGS00/j2ee/cluster/instance.properties
      64 **********************************************************************
      65
      66 **********************************************************************
      67 Worker nodes
      68 -> [00] ID3888600            : /usr/sap/OSM/DVEBMGS00/j2ee/cluster/instance.properties
      69 -> [01] ID3888650            : /usr/sap/OSM/DVEBMGS00/j2ee/cluster/instance.properties
      70 **********************************************************************
      71
      72 [Thr 47057802424304] JLaunchRequestQueueInit: create named pipe for ipc
      73 [Thr 47057802424304] JLaunchRequestQueueInit: create pipe listener thread
      74 [Thr 1115011392] WaitSyncSemThread: Thread 1115011392 started as semaphore monitor thread.
      75 [Thr 1104521536] JLaunchRequestFunc: Thread 1104521536 started as listener thread for np messages.
      76 [Thr 47057802424304] SigISetDefaultAction : default handling for signal 17
      77
      78 [Thr 47057802424304] Thu Jul 14 16:29:00 2011
      79 [Thr 47057802424304] NiInit3: NI already initialized; param 'maxHandles' ignored (1;202)
      80 [Thr 47057802424304] CPIC (version=700.2006.09.13)
      81 [Thr 47057802424304] [Node: server0] java home is set by profile parameter
      82         Java Home: /opt/java/java
      83 [Thr 47057802424304] JStartupICheckFrameworkPackage: can't find framework package /usr/sap/OSM/DVEBMGS00/exe/jvmx.jar
      84
      85 **********************************************************************
      86 JStartupIReadSection: read node properties [ID3888650]
      87 -> node name          : server0
      88 -> node type          : server
      89 -> node execute       : yes
      90 -> jlaunch parameters :
      91 -> java path          : /opt/java/java
      92 -> java parameters    : -verbose:gc -Xtrace -Djava.security.policy=./java.policy -Djava.security.egd=file:/dev/urandom -Dorg.omg.CORBA.ORBClass=com.sap.     engine.system.ORBProxy -Dorg.omg.CORBA.ORBSingletonClass=com.sap.engine.system.ORBSingletonProxy -Djavax.rmi.CORBA.PortableRemoteObjectClass=com.sap.eng     ine.system.PortableRemoteObjectProxy -Djco.jarm=1 -Xmn400m -XX:PermSize=2048m -XX:MaxPermSize=2048m -Dorg.omg.PortableInterceptor.ORBInitializerClass.co     m.sap.engine.services.ts.jts.ots.PortableInterceptor.JTSInitializer
      93 -> java vm version    : 4.1.009 17.0-b16
      94 -> java vm vendor     : SAP Java Server VM (SAP AG)
      95 -> java vm type       : server
      96 -> java vm cpu        : amd64
      97 -> heap size          : 2048M
      98 -> init heap size     : 2048M
      99 -> root path          : /usr/sap/OSM/DVEBMGS00/j2ee/cluster/server0
    100 -> class path         : ./bin/boot/boot.jar:./bin/boot/jaas.jar:./bin/system/bytecode.jar:.
    101 -> OS libs path       : /usr/sap/OSM/DVEBMGS00/j2ee/os_libs
    102 -> main class         : com.sap.engine.boot.Start
    103 -> framework class    : com.sap.bc.proj.jstartup.JStartupFramework
    104 -> registr. class     : com.sap.bc.proj.jstartup.JStartupNatives
    105 -> framework path     : /usr/sap/OSM/DVEBMGS00/exe/jstartup.jar:/usr/sap/OSM/DVEBMGS00/exe/jvmx.jar
    106 -> shutdown class     : com.sap.engine.boot.Start
    107 -> parameters         :
    108 -> debuggable         : no
    109 -> debug mode         : no
    110 -> debug port         : 50021
    111 -> shutdown timeout   : 120000
    112 **********************************************************************
    113
    114 [Thr 47057802424304] JLaunchISetDebugMode: set debug mode [no]
    115 [Thr 1101179200] JLaunchIStartFunc: Thread 1101179200 started as Java VM thread.
    116
    117 **********************************************************************
    118 JHVM_LoadJavaVM: VM Arguments of node [server0]
    119 -> stack   : 1048576 Bytes
    120 -> arg[  0]: exit
    121 -> arg[  1]: abort
    122 -> arg[  2]: vfprintf
    123 -> arg[  3]: -verbose:gc
    124 -> arg[  4]: -Xtrace
    125 -> arg[  5]: -Djava.security.policy=./java.policy
    126 -> arg[  6]: -Djava.security.egd=file:/dev/urandom
    127 -> arg[  7]: -Dorg.omg.CORBA.ORBClass=com.sap.engine.system.ORBProxy
    128 -> arg[  8]: -Dorg.omg.CORBA.ORBSingletonClass=com.sap.engine.system.ORBSingletonProxy
    129 -> arg[  9]: -Djavax.rmi.CORBA.PortableRemoteObjectClass=com.sap.engine.system.PortableRemoteObjectProxy
    130 -> arg[ 10]: -Djco.jarm=1
    131 -> arg[ 11]: -Xmn400m
    132 -> arg[ 12]: -XX:PermSize=2048m
    133 -> arg[ 13]: -XX:MaxPermSize=2048m
    134 -> arg[ 14]: -Dorg.omg.PortableInterceptor.ORBInitializerClass.com.sap.engine.services.ts.jts.ots.PortableInterceptor.JTSInitializer
    135 -> arg[ 15]: -Dsys.global.dir=/usr/sap/OSM/SYS/global
    136 -> arg[ 16]: -Dapplication.home=/usr/sap/OSM/DVEBMGS00/exe
    137 -> arg[ 17]: -Djava.class.path=/usr/sap/OSM/DVEBMGS00/exe/jstartup.jar:/usr/sap/OSM/DVEBMGS00/exe/jvmx.jar:./bin/boot/boot.jar:./bin/boot/jaas.jar:./bin     /system/bytecode.jar:.
    138 -> arg[ 18]: -Djava.library.path=/opt/java/sapjvm_4/jre/lib/amd64/server:/opt/java/sapjvm_4/jre/lib/amd64:/opt/java/sapjvm_4/jre/../lib/amd64:/usr/sap/O     SM/DVEBMGS00/exe:/usr/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/DVEBMGS00/exe:/tmp/sapinst_exe.26924.1310559612:/usr/sap/OSM/SYS/exe/run:/sapdb/programs/lib:/u     sr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib:/usr/sap/OSM/DVEBMGS00/j2ee/os_libs:/usr/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/DVEBMGS00/exe:/us     r/sap/OSM/DVEBMGS00/exe:/usr/sap/OSM/SYS/exe/run:/sapdb/programs/lib
    139 -> arg[ 19]: -Dmemory.manager=2048M
    140 -> arg[ 20]: -Xmx2048M
    141 -> arg[ 21]: -Xms2048M
    142 -> arg[ 22]: -DLoadBalanceRestricted=no
    143 -> arg[ 23]: -Djstartup.mode=JCONTROL
    144 -> arg[ 24]: -Djstartup.ownProcessId=5639
    145 -> arg[ 25]: -Djstartup.ownHardwareId=H0868704103
    146 -> arg[ 26]: -Djstartup.whoami=server
    147 -> arg[ 27]: -Djstartup.debuggable=no
    148 -> arg[ 28]: -DSAPINFO=OSM_00_server
    149 -> arg[ 29]: -DSAPSTART=1
    150 -> arg[ 30]: -DCONNECT_PORT=60140
    151 -> arg[ 31]: -DSAPSYSTEM=00
    152 -> arg[ 32]: -DSAPSYSTEMNAME=OSM
    153 -> arg[ 33]: -DSAPMYNAME=atl-osm-01_OSM_00
    154 -> arg[ 34]: -DSAPPROFILE=/usr/sap/OSM/SYS/profile/OSM_DVEBMGS00_atl-osm-01
    155 -> arg[ 35]: -DFRFC_FALLBACK=ON
    156 -> arg[ 36]: -DFRFC_FALLBACK_HOST=localhost
    157 -> arg[ 37]: -DSAPSTARTUP=1
    158 -> arg[ 38]: -DSAPSYSTEM=00
    159 -> arg[ 39]: -DSAPSYSTEMNAME=OSM
    160 -> arg[ 40]: -DSAPMYNAME=atl-osm-01_OSM_00
    161 -> arg[ 41]: -DSAPDBHOST=atl-osm-01
    162 -> arg[ 42]: -Dj2ee.dbhost=atl-osm-01
    163 **********************************************************************

  • Converting data volume type from LINK to FILE on a Linux OS

    Dear experts,
    I am currently running MaxDB 7.7.04.29 on Red Hat Linux 5.1.  The file types for the data volumes were
    initially configured as type LINK and correspondingly made links at the OS level via "ln -s" command. 
    Now (at the OS level) we have replaced the link with the actual file and brought up MaxDB.  The system
    comes up fine without problems but I have a two part question:
    1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files.
        (might we encounter a performance problem).
    2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
    Your feedback is greatly appreciated.
    --Erick

    > 1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files.
    >     (might we encounter a performance problem).
    Never saw any problems, but since I don't have a linux system at hand I cannot tell you for sure.
    Maybe it's about how to open a file with special options like DirectIO if it's a link...
    > 2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
    There's no 'converting'.
    Shut down the database to OFFLINE.
    Now log on to dbmcli and list all the parameters there are.
    You'll get three to four parameters per data volume, one of them called
    DATA_VOLUME_TYPE_0001
    where 0001 is the number of the volume.
    Open a parameter session and change the value of these parameters from 'L' to 'F':
    param_startsession
    param_put DATA_VOLUME_TYPE_0001 F
    param_put DATA_VOLUME_TYPE_0002 F
    param_put DATA_VOLUME_TYPE_0003 F
    param_checkall
    param_commitsession
    After that the volumes are recognized as files.
    regards,
    Lars
    Edited by: Lars Breddemann on Apr 28, 2009 2:53 AM
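
    Put together, the whole change might look like this single dbmcli session (a sketch: MYDB and the dbm credentials are placeholders, and the number of DATA_VOLUME_TYPE_* parameters depends on how many volumes you have):

    ```shell
    # Sketch: switch all data volumes from type LINK ('L') to FILE ('F').
    dbmcli -d MYDB -u dbm,dbm <<'EOF'
    db_offline
    param_startsession
    param_put DATA_VOLUME_TYPE_0001 F
    param_put DATA_VOLUME_TYPE_0002 F
    param_put DATA_VOLUME_TYPE_0003 F
    param_checkall
    param_commitsession
    db_online
    EOF
    ```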

  • Data Distribution in the Data Volumes

    Hello
    Is it important that data be uniformly distributed across the data volumes?
    Is it possible to redistribute the data after adding some more files?
    Thank you & regards,
    T.C.

    As of MaxDB Version 7.7.06.09, such a mechanism can be activated using parameter EnableDataVolumeBalancing.
    If the parameter EnableDataVolumeBalancing is set to the value YES (deviating from the default), all data is implicitly distributed evenly across all data volumes after you add a new data volume or delete a data volume.
    https://service.sap.com/sap/support/notes/1173395
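
    Activating the balancing could be sketched in dbmcli like this (hedged: MYDB and the credentials are placeholders, and the parameter exists only as of MaxDB 7.7.06.09, per the note above):

    ```shell
    # Sketch: enable automatic data volume balancing (MaxDB >= 7.7.06.09).
    dbmcli -d MYDB -u dbm,dbm <<'EOF'
    param_startsession
    param_put EnableDataVolumeBalancing YES
    param_checkall
    param_commitsession
    EOF
    ```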

  • Maximum data volume in offline mode supported by Syclo Agentry applications

    Hello Experts,
    We are running the SAP Work Manager application 5.3 with Agentry 6.0.
    We are using the iOS based client available on the App Store to run the same.
    I wanted to ask about how much data volume can be supported by the application when it is running in offline mode.
    As the data is stored on the mobile device, is there any upper limit on the data being stored on the device?
    Appreciate your help on the same.
    Thank you,
    Arihant Kothari
    Tags edited by: Michael Appleby

    There is no upper limit besides the device memory. But the more data you put onto the device, the worse the performance you will get. It is recommended to download only what the user needs.

  • Filling of data volumes

    Hi all,
    I have a question regarding the way maxdb allocates space in data volumes.
    Is there a reorganization of the DB data in the background, or does MaxDB just "fill up" the available volumes?
    For example:
    1)
    I create a database with 10 volumes and the space usage is increasing.
    Do the volumes get filled up one after another (first fill volume 0001, then volume 0002, ...) or is there balancing over all available volumes?
    2)
    What happens if I now have to extend the DB space and add two new data volumes?
    Does MaxDB now use only the new volumes, or is there a reorganization which leads to an (almost) equal distribution of I/O over all data files?
    If there is no reorganization of the DB data, would the only way to extend the DB space without a loss in performance be to add several data volumes or to do a backup/restore procedure?
    Best regards,
    Sascha

    > Hi all,
    Hi Sascha,
    > 1)
    > I create a database with 10 volumes and the space
    > usage is increasing.
    > Does the volumes get filled up one after another
    > (first fill volume 0001, then volume 0002, ...) or is
    > there used a balancing over all available volumes?
    the write load of the database is distributed over all attached data volumes to increase the write speed (it's best to have each volume on its own I/O channel). So your data volumes will fill to the same level.
    > What does happen, if I now have to extend db space
    > and add two new data volume?
    > Does maxdb now use only the new volumes or is there a
    > reorganization which leads to an (almost) equal
    > distribution of IO over all datafiles.
    There is currently no redistribution of the data in place. The database still uses all available data volumes for writing, but the new (empty) volumes are preferred.
    We are aware of the increased i/o load on the new data volumes, but the automatic balancing of data volumes is not available yet.
    > If there is no reorganization of db data, the only
    > way to extend db space without loss in performance
    > would be to add several data volumes or to do an
    > backup/restore procedure?
    Backup/restore is one viable solution, but you can also manually distribute the data by adding a number of new data volumes and deleting the old ones while running in online mode.
    regards,
        Henrik

  • Time Capsule "Data" Volume won't Mount

    I've been using a Time Capsule (4th Gen/2TB) for both my rMBP Time Machine backups and as a NAS device. After upgrading to Mountain Lion, I've experienced some problems. For the past few weeks, my Time Machine backups have been frequently interrupted by an error message stating that my backup disk is non-journaled and non-extended, and asking me to please choose an alternate disk. Although unsettling, after pressing Return, and upon connecting the power cord (as I opted to back up only when using the power cord), Time Machine has always promptly resumed. This has occurred roughly daily.
    Today, however, I've been unable to actually mount the Data volume, although my network is functioning properly and I can access the Internet. The Time Capsule status indicator is green, its icon appears in the Finder sidebar, and AirPort Utility recognizes it and displays its correct settings. Upon trying to connect (via Finder), however, an error message states that the server (in this case the Time Capsule, I guess) can't be located, and suggests I check my network settings and try again. Needless to say, the settings appear fine.
    An admittedly brief search within this forum yielded no discussions concerning this specific problem, but I'm hoping the community's more knowledgeable members will be able to at least provide some helpful insights, if not a solution to this problem.
    The inability to back up my rMBP, access my sparsebundle, or manage my externally stored files is very disconcerting. Any solutions or insights regarding this issue will be gratefully received.
    Michael Henke

    There is clearly a bug in Lion that was exacerbated in Mountain Lion, which Apple has yet to fess up to. This loss of connection happens, and there is no fix I can give you other than a reboot to get the network running again.
    I would strongly recommend against using the TC as a NAS; it is designed as a backup target for Time Machine.
    If you want a NAS then buy a true NAS, Synology or QNAP being the top products in that field. They have RAID and automatic backup, with units where you can access the hard disks and replace them. The TC is a sealed unit without any way to back itself up. It was never designed as a NAS.
    Since your TC is an older one, you can try running the 7.5.2 firmware. None of the TCs made in the last year or more can go back that far; they are all stuck on 7.6, but the earlier Gen4 did have 7.5.2, which I think has better stability.
    Now exactly what issues come up with ML I am not sure, but I would be interested to hear your experience.
    Please do a backup, particularly of your iTunes library, before you start fiddling.
    Is the TC the main router in the network, or is it bridged? I have a few suggestions for each type of setup which may keep it running longer, but nothing is an absolute fix.
    Note also that the TC seems to be the main problem device, but the OS does still have issues; some changes were made to AFP security in Lion which have not worked very well.

  • Extraction and loading of Training and Event Management data (0HR_PE_1)

    hello,
    I've got the following doubt:
    before the BI 7.0 release, extractor 0HR_PE_1 extracts event data (event ID, attendee ID, calday, ...), but if you load straight into cube 0PE_C01, as Calendar year/month needs a reference date (for example, the event start date), you get the total duration of the event in hours or days referred to the event start date.
    So in a query filtered by month, you get the total duration of an event that starts in the filtered month but might not end until a few months or a year later, so you don't get appropriate information.
    Example:
    Event          calday        Hours
    10004377  20081120   500        (but from event_attr the event end date is 20090410)
    In a query filtered by 200811 you get the total duration time (500 hours, when in 200811 the event hours were only 20), and if you filter by any month of 2009 you don't get any information about the duration of that event.
    I had to create a copy of standar cube (without calday, only Calendar year/month, Calendar year in time dimension) and disaggrate data creating as many entries for an event as months lasts and adjust calculation of ratios duration of event (duration of event in each month/ total duration of event).
    The question is: Is there any improvement or change on business content in BI 7.0 to treat Training and Event Management data? Has anybody had to deal with that?
    Thank you very much.
    IRB
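    The disaggregation described above (splitting an event's total hours across the months it spans, in proportion to each month's share of the event days) can be sketched as follows. This is only an illustration of the calculation, not standard business content; the function name and proportional-by-days rule are assumptions:

```python
from datetime import date
import calendar

def split_hours_by_month(start: date, end: date, total_hours: float):
    """Distribute an event's total hours across the months it spans,
    proportionally to the number of event days falling in each month."""
    total_days = (end - start).days + 1
    result = {}
    current = date(start.year, start.month, 1)
    while current <= end:
        last_day = calendar.monthrange(current.year, current.month)[1]
        month_start = max(start, current)
        month_end = min(end, date(current.year, current.month, last_day))
        days_in_month = (month_end - month_start).days + 1
        result[current.strftime("%Y%m")] = total_hours * days_in_month / total_days
        # advance to the first day of the next month
        current = date(current.year + (current.month == 12),
                       current.month % 12 + 1, 1)
    return result

# Event 10004377 from the example: 2008-11-20 to 2009-04-10, 500 hours in total
shares = split_hours_by_month(date(2008, 11, 20), date(2009, 4, 10), 500.0)
```

    With these dates the 500 hours are spread over six monthly entries (200811 through 200904), so a query filtered on 200811 sees only November's share instead of the full 500 hours.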

    Hi,
    TEM data is stored in HRP tables.
    You can load the catalog by creating LSMWs for the objects Business event group (L), Business event type (D), Location (F), and Organizer (U), as required.
    An LSMW for tcode PP01 can be used to create these objects.
    To create Business Events (E), you can create an LSMW for PV10/PV11.
    To book attendees, create an LSMW for tcode PV08, as there you can specify the actual business event ID, which reduces ambiguity.
    Use tcode PV12 to firmly book events
    and tcode PV15 to follow up.
    Hope this helps.
    Regards,
    Shreyasi.

  • Training and Event Management Data Load

    Hello Team,
    I would appreciate advice on how to load Training and Event Management data. I think it is stored in HRP tables.
    I am working on an upgrade assignment.
    Thanks

    Hi,
    TEM data is stored in HRP tables.
    You can load the catalog by creating LSMWs for the objects Business event group (L), Business event type (D), Location (F), and Organizer (U), as required.
    An LSMW for tcode PP01 can be used to create these objects.
    To create Business Events (E), you can create an LSMW for PV10/PV11.
    To book attendees, create an LSMW for tcode PV08, as there you can specify the actual business event ID, which reduces ambiguity.
    Use tcode PV12 to firmly book events
    and tcode PV15 to follow up.
    Hope this helps.
    Regards,
    Shreyasi.

  • ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified in windows server 2008 r2

    ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified, on Windows Server 2008 R2. I made an application in ASP.NET C# that uses an ODBC connection. When I deployed my application on Windows Server 2008 R2, there
    was no Microsoft ODBC driver shown in the ODBC Data Source Administrator. So I went to C:\Windows\SysWOW64, opened Odbcad32.exe, and added the Microsoft ODBC driver for Oracle, but when I run my application I get the following error:
    ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
    I am using the following connection string:
     <connectionStrings>
    <add name="theconnetion" connectionString="DSN=abdb;UID=abc;PWD=xyz"/>
     </connectionStrings>
    Guide me: what should I do?

    Did you add a System DSN or a User DSN? If you added a User DSN under your own login, the ASP.NET application will not be able to use it unless its application pool in IIS is configured to run under the same credentials you used when creating
    the DSN. It is better to add a System DSN.
    Also, be careful to ensure that you are using a 64-bit DSN, unless you configure the application to run in 32-bit mode. If the 64-bit application attempts to use the 32-bit driver, you get the same error message, "Data source name not found and no default
    driver specified". See this KB article:
    http://support.microsoft.com/kb/942976/en-us
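    If the DSN configuration keeps causing trouble, one common workaround is a DSN-less connection string that names the ODBC driver directly, so no registry DSN entry is looked up at all. This is only a sketch: the driver name, server alias, and credentials below are placeholders taken from the question, and the Driver value must exactly match a driver name listed in the ODBC Data Source Administrator of the matching bitness:

```xml
<connectionStrings>
  <!-- DSN-less ODBC connection: no System/User DSN is required,
       but the named driver must be installed on the server -->
  <add name="theconnetion"
       connectionString="Driver={Microsoft ODBC for Oracle};Server=abdb;UID=abc;PWD=xyz"/>
</connectionStrings>
```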

  • Getting Training and Event Management Data using IT0031

    Scenario:
    One of the employees is retired. Now we hire him again for the same role (extending his period after posting him as retired).
    The good thing is that we can use IT0031 "Reference Personnel Numbers" to get the desired ITs, and we found that other customized ITs can be copied by checking the "Copy Infotype" attribute of the IT records.
    Now this seems to be valid if we are in the same module,
    but what if I want this "referencing of ITs" done for the Training and Event Management data of the employee?
    Is it possible automatically, or do we have to go for ABAP? (I would prefer not to use ABAP.)
    I hope I have stated the problem clearly.
    Feedback needed, ASAP.

