Unix data volume filesystem number query

Hi,
Does anyone have a definitive answer as to whether more than one data filesystem is supported in MaxDB 7.8 on Unix?
I can't easily find an answer in any of the guides or on the Marketplace. I'm presuming yes and that it should be no problem; certainly in my experience other sites always have sapdata1, 2, 3 and so on, but this is my first MaxDB site and there is only one sapdata FS here. We need to extend the data area, but the volume group is restricted in size, so we are looking at all possible options (I already know what the other options are, so I just want to know about a secondary data FS).
If someone can point me to the official SAP answer to this (or provide one) that would be much appreciated.
Thanks,
Chaz.

Hi Chaz,
> Does anyone have a definitive answer as to whether more than one data filesystem is supported in MaxDB 7.8 on Unix?
The answer is yes. I have already configured SAP systems with sapdata1, sapdata2, sapdata3 and sapdata4 on MaxDB on Unix. I have done it for a liveCache database as well.
> I can't easily find an answer in any of the guides or on the Marketplace. I'm presuming yes and that it should be no problem; certainly in my experience other sites always have sapdata1, 2, 3 and so on, but this is my first MaxDB site and there is only one sapdata FS here. We need to extend the data area, but the volume group is restricted in size, so we are looking at all possible options (I already know what the other options are, so I just want to know about a secondary data FS).
When you install SAP on a MaxDB database, by default the installer shows only a single sapdata with its size. On the same screen there is an option to add further sapdata volumes of similar or larger size, and the path of each sapdata<X> can be changed there as well.
For example, say you have four file systems on the OS: sapdata1, sapdata2, sapdata3 and sapdata4. During installation I may then create four sapdata<x> volumes and distribute them as shown below:
/sapdb/<SID>/sapdata1                      2000MB
/sapdb/<SID>/sapdata2                      2000MB
/sapdb/<SID>/sapdata3                      2000MB
/sapdb/<SID>/sapdata4                      2000MB
Similarly, I can create a separate filesystem for the log:
/sapdb/<SID>/saplog                        2000MB
Note: You may not find this written in SAP notes, as there is no standard definition of how many sapdata<x> filesystems you may have in a system. The MaxDB installation process expects a minimum of one sapdata<x> and one saplog<x> partition for the installation to complete.
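If the database already exists, a further data volume on a new filesystem can also be added later with dbmcli. A minimal sketch, assuming the db_addvolume syntax of the 7.x releases (SID, password, path and size are placeholders; the size is given in 8 KB pages, so 256000 pages is roughly 2 GB):
dbmcli -d <SID> -u control,<password>
db_addvolume DATA /sapdb/<SID>/sapdata2/DISKD0002 F 256000
Here F marks the new volume as type FILE; the new sapdata2 filesystem just has to be mounted and writable for the database software owner.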
Regards,
Deepak Kori

Similar Messages

  • Ref Cursor Data Volume

    Is there a limit to the volume of data a ref cursor can return via an Oracle database procedure call? I am using a ref cursor to return data and, when testing using Toad, it hangs the session. My Business Objects report also hangs because of the large data volume (> 750,000 rows) returned via the ref cursor.
    The database procedure works fine when the number of rows is less than 2,000.
    Has anyone had this problem before?
    Many Thanks,
    Georgie

    George wrote:
    > Is there a limit to the volume of data a ref cursor can return via an Oracle Database Procedure call?
    No.
    {thread:id=886365}
    Re: OPEN cursor for large query
    A ref cursor is a pointer to a compiled SQL statement; it has no rows, so there is no limit to the number of rows you can use it to fetch, just as there is no limit to the number of rows a select can return.
    > I am using a ref cursor to return data and testing using Toad, it hangs the session. My Business Objects report also hangs because of the large data volume (750,000 rows) returned via a ref cursor.
    This is very confusing: if it hangs, how do you know it returns 750,000 rows?
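    To illustrate that point, a minimal sketch of a procedure returning a ref cursor (the table and column names are made up): opening the cursor binds the statement but materializes no rows, which is why the cursor itself imposes no row limit.
    CREATE OR REPLACE PROCEDURE get_orders (p_rc OUT SYS_REFCURSOR) AS
    BEGIN
      -- OPEN binds the compiled statement to the cursor variable;
      -- no rows are materialized at this point
      OPEN p_rc FOR SELECT order_id, amount FROM orders;
    END;
    /
    Rows only flow as the client fetches from p_rc, so a slow fetch of 750,000 rows is a client/network symptom, not a cursor limit.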

  • Error while trying to retrieve data from BW BEx query

    The following error occurs when trying to retrieve data from a BW BEx query (on an ODS) when more than 50 characteristics are used.
    In a BEx report this is a known limitation, but is it also a limitation in a Webi report?
    Is there any other solution for this scenario, so that it is possible to retrieve more than 50 characteristics?
    A database error occured. The database error text is: The MDX query SELECT  { [Measures].[3OD1RJNV2ZXI7XOC4CY9VXLZI], [Measures].[3P71KBWTVNGY9JTZP9FTP6RZ4], [Measures].[3OEAEUW2WTYJRE2TOD6IOFJF4] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [ZHOST_ID2].[LEVEL01].MEMBERS, [ZHOST_ID3].[LEVEL01].MEMBERS ), [ZHOST_ID1].[LEVEL01].MEMBERS ), [ZREVENDDT__0CALDAY].[LEVEL01].MEMBERS ) ........................................................ failed to execute with the error Invalid MDX command with UNSUPPORTED: > 50 CHARACT.. (WIS 10901)

    Hi,
    That warning/error message comes from the MDX interface on the BW server; it does not originate from BOBJ.
    This question would be better asked under support component BW-BEX-OT-MDX.
    A similar discussion can be found using search: Limitation of Number of Objects used in Webi with SAP BW Universe as Source
    Regards,
    Henry

  • Converting data volume type from LINK to FILE on a Linux OS

    Dear experts,
    I am currently running MaxDB 7.7.04.29 on Red Hat Linux 5.1. The file types for the data volumes were
    initially configured as type LINK and correspondingly made links at the OS level via the "ln -s" command.
    Now (at the OS level) we have replaced the links with the actual files and brought up MaxDB. The system
    comes up fine without problems, but I have a two-part question:
    1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files?
        (Might we encounter a performance problem?)
    2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
    Your feedback is greatly appreciated.
    --Erick

    > 1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files?
    >     (Might we encounter a performance problem?)
    Never saw any problems, but since I don't have a Linux system at hand I cannot tell you for sure.
    Maybe it's about how a file is opened with special options like DirectIO when it's a link...
    > 2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
    There's no 'converting'.
    Shut the database down to offline.
    Now log on to dbmcli and list all the parameters there are.
    You'll get three to four parameters per data volume, one of them called
    DATA_VOLUME_TYPE_0001
    where 0001 is the number of the volume.
    Open a parameter session and change the value of these parameters from 'L' to 'F':
    param_startsession
    param_put DATA_VOLUME_TYPE_0001 F
    param_put DATA_VOLUME_TYPE_0002 F
    param_put DATA_VOLUME_TYPE_0003 F
    param_checkall
    param_commitsession
    After that, the volumes are recognized as files.
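    Put together, a complete session might look like this (a sketch: the control user's password is a placeholder, the param_put line is repeated per volume, and param_directget as a verification step is an assumption carried over from other 7.x releases):
    dbmcli -d <SID> -u control,<password>
    db_offline
    param_startsession
    param_put DATA_VOLUME_TYPE_0001 F
    param_checkall
    param_commitsession
    param_directget DATA_VOLUME_TYPE_0001
    db_online
    param_directget should then report F for the volume before the database is brought back online.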
    regards,
    Lars

  • Date formats in BI Query designer

    Hi gurus
    We have enhanced our cube 0SD_C03, adding a number of dates such as Railway Receipt date, Actual Goods Issue date and LFDAT; these are only a few examples. Our reporting scenario now demands reporting based on these dates, presented as in our time dimension, e.g.
    YYYYMM        Qty
    but based on these dates rather than on the date mapped in the time dimension of the cube.
    In Oracle we have the TO_CHAR function, through which we can change the format of a date while writing a SQL query.
    Is there any option of changing the date format in the query without changing the modelling?
    Please suggest.
    Thanks
    Shivani

    Hi,
    You can create a new characteristic of type CHAR, length 6. Include it in the data target and populate it via an update rule (routine).
    Use the standard date (Railway Receipt date) as input and pass the value to the new characteristic in the required format (YYYYMM). Follow the same approach for the other dates.
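    A minimal sketch of such a routine (BW 3.x update rule syntax; /BIC/ZRRDATE is a made-up field name standing for the Railway Receipt date in internal YYYYMMDD format):
    * Fill the new CHAR6 characteristic with YYYYMM from the date field
      RESULT = COMM_STRUCTURE-/bic/zrrdate+0(6).
    * RETURNCODE <> 0 would skip the record, ABORT <> 0 the whole package
      RETURNCODE = 0.
      ABORT = 0.
    Since the internal date format is YYYYMMDD, the first six characters are already the required YYYYMM.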
    Let me know if you have any doubts.
    Regards,
    Viren

  • Using RSCRM_BAPI with Huge Data Volume

    Hi,
    I am using RSCRM_BAPI to extract a query output into a database table, but the query returns a large volume of data. I am not sure whether RSCRM_BAPI works fine when the data volume is huge. Please suggest whether using it is a good design, or whether another method is available to take care of such a scenario.
    Regards,
    Dibyendu

    I have used RSCRM_BAPI when the records were exceeding 65,000 (the limitation of Excel) and it worked for me,
    so I think it should work for you also.
    But there are some limitations:
    for example, you cannot see texts.
    Assign points if it helps,
    Ajay

  • Oracle UNIX Data Time Issue/Question

    I have a project where I need to display some data from a commercial software package. All of the tables except one store date values in NUMBER columns as Unix time. I can convert these no problem, but one table has date values stored in FLOAT(126) columns and I cannot figure out how to convert them to get a valid, and accurate, date.
    For example, the column contains the value 38513.5775115741, which the application front end displays as Friday, June 10, 2005. Does anyone see a "formula" for this?
    Maybe it's obvious and I've been trying too hard or looking at it too long for it to make sense to me....
    Thanks in advance.

    This looks close:
    SQL> select to_char(date '1900-01-01' + 38513.5775115741,'fmDay, Month dd, yyyy','nls_date_language=american') from dual
      2  /
    TO_CHAR(DATE'1900-01-
    Sunday, June 12, 2005
    1 row selected.
    So maybe you should use 30-12-1899 as the base date, or the number was really 38511.5775115741 instead of 38513.5775115741?
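    For what it's worth, with the 30-12-1899 base date (the epoch used by Excel/OLE serial dates) the same expression should land exactly on the date the front end shows; a sketch, not run against a live system:
    SQL> select to_char(date '1899-12-30' + 38513.5775115741,'fmDay, Month dd, yyyy','nls_date_language=american') from dual
      2  /
    Friday, June 10, 2005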
    Regards,
    Rob.

  • Performance: How to manage large reports with high data volume

    Hi everybody,
    we are currently running some tests on our BO server system to define limitations and opportunities. Among other things we constructed a large report with a high data volume (about 250,000 data records).
    When executing the query in SAP BEx Query Designer it takes about 10 minutes to display. In Crystal Reports we rebuilt the row and column structure of the query. The data retrieval in Crystal Reports Designer takes about 9 minutes - even faster than in the query.
    Unfortunately in BO InfoView the report is not displayed. After 30 minutes of loading time we get a timeout error RCIRAS0244.
    com.crystaldecisions.sdk.occa.managedreports.ras.internal.ManagedRASException:
    Cannot open report document. ---
    The request timed out because there has been no reply from the server for 600.000 milliseconds.
    A refresh of a report with saved data is not possible either.
    Now we are asking ourselves some questions:
    1. Where can we set the timeout for InfoView to a value larger than 30 minutes?
    2. Why is InfoView so slow compared to Crystal Designer? Where is the bottleneck?
    3. Whats the impact of SAP single sign-on compared to Enterprise logon on the performance?
    Thanks for any help and comments!
    Sebastian

    Hi Ingo,
    thank you for your reply.
    I will check the servers and maybe change the time limits.
    Unfortunately we have a quite slow server system that probably causes this timeout. In CR Designer we have no problems; it's really quick. Is it to be expected that CR Designer and InfoView have almost the same performance?
    Another interesting point: when we execute the query in SAP BEx Query Designer it takes about 10 minutes to open, while Crystal Designer needs just about 5-6 minutes. We integrated exactly the same fields in the report as exist in the SAP BEx query.
    What may cause the difference?
    - Exceptions and conditions in the query?
    - Free characteristics in the query?
    - anything else?
    Best regards,
    Sebastian

  • Recover my portege original volume serial number

    Dear All,
    I am using a Portege Z830, and I have changed the volume serial number of the hard disk (an SSD in this case). Can you please advise how I can recover the original serial number, or at least find out what it was?
    Thanks
    Belal

    If you did not take the precaution of transferring any data that was on the HDD before you sent it off for warranty repair, then you must accept the reality that the data on the partition is gone. Saving important data to an external storage device is something we advise people to do before they send in their notebook for repair.
    It is the norm that the operating system is reimaged, whether on a new or the original hard disk, before the unit is returned to a client. That is done to put the notebook back into the factory state in which it was originally delivered.

  • Missing physical media and volume license number

    Recently, we had a set of installer/content disks, case, and volume license number disappear. My employer, an art school, legally purchased a volume license giving us 107 licenses for Design Premium Creative Suite 3 for official school use.
    We are fairly certain that the physical media and license serial number in question were all stolen from us, and we have no clue what is presently being done with the whole lot. We fear that the unknown individuals will use our volume license serial number to exceed the number of installations we are legally allowed.
    We have already found relevant information regarding having the physical media replaced. We want to know what we must do as a school to continue using the software we legally purchased, whether it means being issued a new license serial number or not. Will we suddenly find ourselves unable to install and/or update CS3 around campus? Are we required to buy a new volume license?

  • Data Volume - Calculation performance

    Hi,
    We are experiencing degrading calculation performance as data volume increases.
    We are implementing BPC 7.5 SP05 NW (on BW 7.0 EHP1).
    An allocation script that ran in 2 minutes when the database contained only 800,000 records took over 1 hour after the database was populated with a full year of data.
    All logic has been written to calculate on a reduced, defined scope, but this does not seem to improve the execution time. When I check the formula log, the scope is respected.
    The application is not that large either: 12 dimensions, the largest containing 300 members and 3 hierarchical levels.
    We optimized the database, but to no avail.
    What can be done to optimize performance? Are there any technical settings in BPC or BW that can be fine-tuned?
    Thanks,
    Regis

    Hi Ethan,
    Take a look at one of the allocation scripts: http://pastebin.com/TA16xCd3
    We are testing RUNLOGIC, but we are facing problems in two situations:
    - passing the DM package variable to the RUNLOGIC script
    - using a passed variable in the called script
    The DM package prompts for 3 selections: ENTITY, TIME and CATEGORY.
    The RUNLOGIC script:
    *SELECT(%DIVISIONS%,"[ID]",DIVISION,"[LEVEL]='DIV' AND [STORECOMMON]<>'Y'")
    *SELECT(%BRANCHES%,"[ID]",BRANCH,"[BRANCHTYPE]='STORE'")
    *START_BADI RUNLOGIC
         QUERY=OFF
         WRITE=ON
         LOGIC=ALLOC_DIV_ACTUAL_S.LGF
         DIMENSION ENTITY=C1000
         DIMENSION TIME=FY10.MAY
         DIMENSION CATEGORY=ACTUAL
         DIMENSION DIVISION=%DIVISIONS%
         DIMENSION BRANCH=%BRANCHES%
         CHANGED=ENTITY,TIME,CATEGORY,DIVISION,BRANCH
         DEBUG=ON
    *END_BADI
    In ALLOC_DIV_ACTUAL_S.LGF, we are using a %DIVISION_SET% variable. At the time of validating, we get a message "Member "" does not exist".
    When we run the package, it fails with the same error message:
    An exception with the type CX_UJK_VALIDATION_EXCEPTION occurred, but was neither handled locally, nor declared in a RAISING clause
    Member "" not exist
    Thanks
    Regis

  • Anyone using durable topics with high data volumes?

    We're evaluating JMS implementations, and our requirements call for durable subscribers, where subscribers can go down for several hours, while the MQ server accumulates a large number of messages.
    Is anyone using Sun MQ in a similar scenario? How is it holding up?
    Sun folks, do you know of production installations that use durable topics with high data volumes?
    thanks,
    -am

    We are using a cluster of Sun's JMS MQ 3.6 with durable message queues and persistent topics. In a 4-hour window each night we run over 20,000 messages through a queue. The cluster sits on two Windows servers, the producer client is on an AIX box and the consumer is running on an iSeries. Within the 20,000 messages are over 400,000 transactions; each message can have many transactions.
    Yes, the iSeries client has gone down twice and the producer continued, with the message queue piling up, as it should. We just use the topic to send and receive command and status inquiries to the clients. So everything works fine.
    We have only had a couple of issues with a client locking, and that may be fixed with SP3; we are in the process of installing that. The only other issue we have had is that once in a while the producer tries to send an object message with too many transactions and it throws a JMS exception. So we put a cap on the size of the messages: if it's over a set number of transactions it sends each transaction separately, otherwise it sends all the transactions in one object-type (linked list of transactions) message.
    Compare the cost of this JMS system with Tibco or Sonic and you're looking at big savings.
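    As an aside, the durable part of such a setup is only a few lines. A minimal sketch of a durable topic subscriber against the JMS 1.1 API (the JNDI names, client ID and subscription name are made up, and error handling is omitted):
    import javax.jms.*;
    import javax.naming.InitialContext;
    public class DurableStatusListener {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory cf =
                (TopicConnectionFactory) ctx.lookup("jms/TopicConnectionFactory");
            TopicConnection conn = cf.createTopicConnection();
            conn.setClientID("iseries-consumer");  // a client ID is required for durable subscriptions
            TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = (Topic) ctx.lookup("jms/StatusTopic");
            // the broker retains messages for "statusSub" while this client is down
            TopicSubscriber sub = session.createDurableSubscriber(topic, "statusSub");
            conn.start();
            Message msg = sub.receive(5000);  // picks up anything that accumulated
            if (msg != null) {
                System.out.println("got: " + msg);
            }
            conn.close();
        }
    }
    As long as the same client ID and subscription name are used on reconnect, the broker holds on to the messages that arrived while the consumer was down.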

  • WD My Cloud error 004: Data volume failed to mount

    Hello,
    For some time now I have only had a red LED and no access to my WD My Cloud 2TB. I can still reach the My Cloud dashboard sporadically, but it shows error 004. I had mainly used the My Cloud as storage for photos, which of course are not saved anywhere else. Can I somehow get my photos back?
    Thanks, Matthias

    Have you tried a "system only" reset to factory defaults via the dashboard? If that does not help, you can contact support. They will probably not help you get your data back, though; at best they will send a new/repaired device without your data. Apart from that, someone here was able to solve the same problem: http://community.wd.com/t5/WD-My-Cloud/Data-volume-failed-to-mount/td-p/631497
    You can give it a try:
    This is "my" solution, and it may not work for everyone. But it would seem that data blocks get corrupted when power is abruptly cut to the drive. SSH must be enabled for this to work.
    1. Log onto the drive; root and welc0me are the default username/password combo.
    2. Make sure your partitions are intact. # parted -l will tell you this.
    3. Type # mke2fs -n /dev/sda4. This will get you the filesystem, but more importantly, at the end it gives you the superblock backup locations by block number.
    4. Pick one, then enter this command: # e2fsck -b blocknumber /dev/sda4. It will look for bad blocks and ask for your confirmation to continue/ignore, then write over any corrupted ones. Yes to ignore, yes to rewrite. It will do up to five passes of diagnostics, prompting if it finds anything out of the ordinary such as file counts and errant inodes. I answered yes to everything. When it is complete it will either report "Killed" or a summary.
    5. Type # reboot and let the drive rebuild its library. 30-ish minutes later (depending on how much data you had on it) you should have access to your files.

  • How to get Physical Address and Volume Serial number of system

    Hi Experts,
    Is there any method or FM by which I can get the system's physical address and volume serial number?
    I want to validate a report so that it is specific to one system.
    Regards,
    Nitin Karamchandani.

    Hi,
    The session details shown in SM04 can be read with the statement below.
    DATA: USR_TABL TYPE USRINFO OCCURS 1 WITH HEADER LINE.
    DATA: TH_OPCODE(1) TYPE X.
    CONSTANTS: OPCODE_LIST LIKE TH_OPCODE VALUE 2.
    * SM04 session details of the users logged on to the SAP system
    CALL 'ThUsrInfo' ID 'OPCODE' FIELD OPCODE_LIST
      ID 'TABUSR' FIELD USR_TABL-*SYS*.
    READ TABLE USR_TABL WITH KEY BNAME = <user-name>.
    Reading the table entry for a user gives you all of that user's session details.
    Regards,
    Prabhudas

  • Key Date variable; Interval in query properties for Time Dep. Masterdata

    Hi,
    I've been searching the forum, and I think I know the answer already, but I'd still like
    to ask whether it's possible to create a key date interval variable to be used in the query properties.
    As far as I can see you can only report by a single key date, set in the query properties, for time-dependent master data, but my customer has asked me to investigate the possibility of entering a date range.
    As all master data is time dependent I don't see how this would work, but if someone can shed some light on this, maybe there is a solution available?
    Thanks for your help.
    M.

    Hi Marc,
    I understand your problem, and I had an idea for a workaround.
    If you can compound the master data with both Valid From and Valid To as characteristics, the query can compute the key figures against them.
    Every change in Valid From or Valid To is then unique to the system.
    Thanks
    N.Ganesh
