SAP file system allocation
Hi Experts,
I would like to install SAP ECC 6.0 on Unix (AIX/Linux). What are the necessary file systems to create, and how much space should I allocate to each?
By,
s.senthil
Hi,
Thanks, that link is useful for my reference.
I need to know the recommended directories to create and their sizing on Unix.
for example :
/usr/sap/trans -> 2 GB
/usr/sap/<sid> -> 5 GB
If you have any ideas, please send a link.
By,
S.senthil
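No single answer fits every system, but as a rough sketch (the sizes are starting points only, and the demo base path and SID below are made up so the commands are runnable anywhere; on a real host each entry is its own mounted file system):

```shell
# Sketch of a common SAP/Oracle file-system layout for an ECC 6.0 ABAP
# system. BASE is a demo stand-in; on a real host each entry is its own
# mounted file system (JFS2 on AIX, ext4/xfs on Linux). DEV is a made-up SID.
BASE="${BASE:-/tmp/sap-fs-demo}"
SID="DEV"

# Rough starting sizes (verify with the SAP Quick Sizer for your load):
#   /sapmnt/<SID>             ~5 GB   kernel, profiles, global directory
#   /usr/sap/<SID>            ~10 GB  instance, work and log directories
#   /usr/sap/trans            ~5 GB   transport directory (DIR_TRANS)
#   /oracle/<SID>             ~20 GB  Oracle home and client
#   /oracle/<SID>/sapdata1-4  sized from the database, not fixed
#   /oracle/<SID>/origlogA/B  ~1 GB each, online redo logs
#   /oracle/<SID>/mirrlogA/B  ~1 GB each, mirrored redo logs
#   /oracle/<SID>/oraarch     at least one day of archived redo logs
for d in "sapmnt/$SID" "usr/sap/$SID" "usr/sap/trans" \
         "oracle/$SID/sapdata1" "oracle/$SID/origlogA" "oracle/$SID/oraarch"; do
    mkdir -p "$BASE/$d"
done
ls -R "$BASE"
```

The exact sizes come out of the sizing exercise for your workload; the layout itself is what the installer expects.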
Similar Messages
-
How to see my Z programs in the SAP file system
Hello everyone,
My Z programs were deleted due to some server problem on my SAP ECC 6.0 system. Is there any way to get my programs back from the SAP file system?
If yes, then where exactly in the Windows folder structure are my Z programs stored?
elinuk wrote:
> My Z programs were deleted due to some server problem on my SAP ECC 6.0 system. Is there any way to get my programs back from the SAP file system?
If your Z programs are present on another server, e.g. QAS/PROD, then you can import the programs into your DEV server.
> If yes, then where exactly in the Windows folder structure are my Z programs stored?
I am not sure about programs associated with unreleased transport requests (TRs). But when you release your TR, the data gets stored in the cofiles and data files in the transport directory (DIR_TRANS) of the application server.
Hope this helps.
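To make the transport directory layout concrete, the sketch below mimics the naming convention with dummy files; the request number, SID and demo path are made up:

```shell
# Demo of the DIR_TRANS naming convention; the real directory is
# usually /usr/sap/trans, shared across the systems of the landscape.
TRANSDIR="${TRANSDIR:-/tmp/trans-demo}"
mkdir -p "$TRANSDIR/cofiles" "$TRANSDIR/data"

# Releasing a (hypothetical) request DEVK900123 typically produces:
#   cofiles/K900123.DEV  control file: import steps and return codes
#   data/R900123.DEV     data file: the transported objects themselves
touch "$TRANSDIR/cofiles/K900123.DEV" "$TRANSDIR/data/R900123.DEV"
ls "$TRANSDIR/cofiles" "$TRANSDIR/data"
```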
BR,
Suhas -
SAP file systems are being updated at storage level as well as in the Trash
Hi Friends,
We are facing a strange but serious issue with our Linux system. We have multiple instances installed on it, but one instance's file systems are visible in the Trash.
The exact issue is this:
1. We have DB2 installed on Linux, and one of our instances' mount points is visible in the Linux Trash. If I create a file at storage level, e.g. touch /db2/SID/log_dir/test, it is dynamically reflected and created in the Trash as well.
2. This cannot be normal behaviour for any OS.
3. If I delete any file from the Trash related to this particular SID (instance), the file is deleted from its actual location.
I know this is not related to SAP configuration, but I want to find the root cause. If any Linux expert can help with this issue, I am waiting for an early reply.
Regards,
Hi Nelis,
I think you have misinterpreted this issue. Let me explain in detail. We have the following mount points on storage, with SAP installed on them:
/db2/BID
/db2/BID/log_dir
/db2/BID/log_dir2
/db2/BID/log_archive
/db2/BID/db2dump
/db2/BID/saptemp1
/db2/BID/sapdata1
/db2/BID/sapdata2
/db2/BID/sapdata3
/db2/BID/sapdata4
Now I can see the same mount points in the Linux Trash. If I create a folder/file in any of the above-mentioned mount points, it is dynamically updated in the Trash; if I delete something at storage/OS level, the same is deleted from the Trash, and vice versa.
I have checked everything and no symlink exists anywhere, but I am not sure about the storage/OS level; that is what I want to find out.
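If anyone wants to verify the symlink/bind-mount theory on the box: comparing device and inode numbers tells you whether two paths are literally the same directory, which would explain the mirrored creates and deletes. A sketch with stand-in paths (on the real host, substitute the actual Trash entry, e.g. under ~/.local/share/Trash, and the mount point):

```shell
# Succeed if both paths resolve to the same device:inode pair,
# i.e. they are the same directory under two names.
same_dir() {
    [ "$(stat -c %d:%i "$1")" = "$(stat -c %d:%i "$2")" ]
}

# Demo: a symlink standing in for the suspected Trash entry.
mkdir -p /tmp/bid-demo/log_dir
ln -sfn /tmp/bid-demo /tmp/trash-entry
if same_dir /tmp/trash-entry /tmp/bid-demo; then
    echo "same directory: deleting in one deletes in the other"
fi

# Bind-mounted subtrees show a non-/ root in field 4 of mountinfo:
awk '$4 != "/" { print $4, $5 }' /proc/self/mountinfo
```

If `same_dir` succeeds for the Trash entry and the mount point, the "Trash" copy is not a copy at all, which matches the behaviour you describe.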
Regards, -
SAP file system restoration on other server
Dear Experts,
To check that our offline file system backup is successful, we are planning to restore the offline file system backup from the tape on to a new test server.
Our current SAP system (ABAP only) is in cluster with CI running on one node (using virtual host name cicep) & DB running on another node (using virtual host name dbcep).
Now, is it possible to restore the offline file system backup of the above said cluster server on to a single server with a different host name?
Please help on this.
Regards,
Ashish Khanduri
Dear Ashish,
We want to include a file system backup process as part of our backup strategy. To test the waters, we are planning to take a backup at the file system level. The following are the file systems in our production system.
We have a test server (different hostname), without any file systems created beforehand.
I want to know:
1. Which file systems will be required from the below?
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 4194304 3772184 11% 5621 2% /
/dev/hd2 10485760 6151688 42% 43526 6% /usr
/dev/hd9var 4194304 4048944 4% 4510 1% /var
/dev/hd3 4194304 2571760 39% 1543 1% /tmp
/dev/hd1 131072 129248 2% 85 1% /home
/proc - - - - - /proc
/dev/hd10opt 655360 211232 68% 5356 18% /opt
/dev/oraclelv 83886080 73188656 13% 11091 1% /oracle
/dev/optoralv 20971520 20967664 1% 4 1% /opt/oracle
/dev/oracleGSPlv 83886080 74783824 11% 18989 1% /oracle/GSP
/dev/sapdata1lv 833617920 137990760 84% 3189 1% /oracle/GSP/sapdata1
/dev/sapdata2lv 623902720 215847400 66% 82 1% /oracle/GSP/sapdata2
/dev/sapdata3lv 207093760 108510632 48% 24 1% /oracle/GSP/sapdata3
/dev/sapdata4lv 207093760 127516424 39% 28 1% /oracle/GSP/sapdata4
/dev/origlogAlv 20971520 20730080 2% 8 1% /oracle/GSP/origlogA
/dev/origlogBlv 20971520 20730080 2% 8 1% /oracle/GSP/origlogB
/dev/mirrlogAlv 20971520 20762848 1% 6 1% /oracle/GSP/mirrlogA
/dev/mirrlogBlv 20971520 20762848 1% 6 1% /oracle/GSP/mirrlogB
/dev/oraarchlv 311951360 265915600 15% 526 1% /oracle/GSP/oraarch
/dev/usrsaplv 41943040 41449440 2% 165 1% /usr/sap
/dev/sapmntlv 41943040 20149168 52% 565823 21% /sapmnt
/dev/usrsapGSPlv 41943040 25406768 40% 120250 5% /usr/sap/GSP
/dev/saptranslv 41943040 5244424 88% 136618 18% /usr/sap/trans
IDES:/sapcd 83886080 4791136 95% 18878 4% /sapcd
GILSAPED:/usr/sap/trans 41943040 5244424 88% 136618 18% /usr/sap/trans
2. Is it possible to back up the file systems directly (like /dev/oracleGSPlv)? I ask because, when I back up /oracle (using tar), all the folders under /oracle, like /oracle/GSP, /oracle/GSP/sapdata1 etc., are also backed up. I do not want that; I would like to back up each file system directly.
3. Which Unix backup tools are used to back up the individual file systems?
4. How do we restore the filesystems to the test server?
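On question 2: with GNU tar you can stop the archive from descending into other file systems, which is exactly what you want when sapdata1..4 etc. are separate mounts under /oracle; each nested file system then gets its own tar run. A sketch with demo paths (on AIX, the native per-file-system backup/restore utilities are the usual alternative):

```shell
# Demo tree standing in for /oracle; on the real host GSP/sapdata1
# would be a separate mounted file system and therefore be skipped.
SRC="${SRC:-/tmp/oracle-src}"
mkdir -p "$SRC/GSP/sapdata1"
echo "demo" > "$SRC/GSP/init.ora"

# --one-file-system: stay on the file system of the starting directory,
# so nested mounts are excluded from this archive.
tar --one-file-system -cf /tmp/oracle-demo.tar -C "$SRC" .
tar -tf /tmp/oracle-demo.tar
```

In the demo nothing is actually mounted, so everything lands in the archive; on the production host the nested mount points would appear only as empty directories.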
Thanks for your advice.
Abdul
Edited by: Abdul Rahim Shaik on Feb 8, 2010 12:10 PM -
SAP File System Access - SLD Naming Convention/Suggestions
I would like to access our ECC file system to pick up files we will use to create Idocs.
I'm wondering the best way to describe the file system access in the SLD.
I have a business system for the main client on the ECC system (BS_ED1CLNT010) for example but the OS isn't client specific. I could use this as the Business System in the scenario and define a file adapter that connects to the unix server.
Any thoughts?
Maybe I didn't frame my question properly.
In the ECC system we have multiple clients (20, 30, 40, etc.). If I am going to post an IDoc to a client in this system, I need to define each one as a business system in the SLD and import it into the Integration Directory. So I would have BS_XXXCLNT010, BS_XXXCLNT020, etc., one for each client. These all share the same technical system. If I want to post an IDoc to a client on the ECC system, I have to define a business system and an interface for that client and every client that will receive an IDoc (as well as the ALE settings on the ECC side).
Each of these resides on the same SAP server (sap00001 for example), and there is a directory (/public for example) on this server. This isn't client specific.
I wish to pick up a file from the ECC file system and post it to a client on the ECC system (maybe different ones based on the data in the file).
I don't want to define the file adapter under BS_XXXCLNT020, since it isn't specific to client 020, although that would work.
Do I create a new TS in the SLD as third party, standalone Java, and a BS for that? TS_XXX_FILESYSTE (3rd party), BS_XXX_FILESYSTEM for the TS?
I'm really looking for clarity in the definition of the SLD. -
SAP Backups ; R/3 & file system
Hello,
What is the difference between an SAP R/3 backup and an SAP file system backup?
What are the files backed up in each case?
Into which backup does the Oracle DB fit (data files, control files, redo logs, SPFILE)?
Into which backup do the OS files fit (here a SUSE OS)?
Thanks
There is no separate concept of an "SAP file system backup"; it means an OS-level file system backup.
An SAP R/3 backup backs up all the sapdata (database) files, control files, as well as redo logs.
An OS-level file system backup backs up files based on your requirements and rotation.
You can use both backups for restoration. An SAP R/3 backup takes care of everything in backup as well as restoration,
but with an OS-level backup you must first know which files need to be backed up and what exactly all the directories contain.
Regards,
Nick Loy -
Hello everybody!
Is there a simple way to download all the data from, e.g., table MARA in the background to the SAP file system with SAP standard applications?
The requirement of our customer is that every evening a job should start which collects all the data from various tables like MARA or VBAK etc. and writes it to CSV files, which should be saved in the SAP file system.
Of course it's possible to write a simple report which could do this, but I wonder if there is any SAP standard application which could do it.
Thanks for any helpful answers!
Christian
Hi Christian,
If you look at the Quickview in Basis mode, there are some output options under "Export as". One of these may help.
When you run the quickview the selection screen should give you a range of output options - I had hoped that you might be able to choose to write to a SAP file from there, although I think it may be limited to local (PC) files only, which I don't think is what you want.
As an alternative, how about looking at the database backup utilities?
I would think your basis people should be able to help you set up a single-table backup. Would that satisfy the requirement?
Hope this is some help to you,
Best Regards
Robin -
Please advise on SAP and DB2 file systems during installation
Dear Experts
SAP ERP 6.0 EHP5 ( AS ABAP and AS JAVA(EP))
DB2
RHEL 6
I have a requirement to install Solution Manager, development and production servers. At present the client has allocated the hardware below for the 3 servers and one MSA storage unit:
Landscape: Two system landscape
DEV Server: 2*300GB HDD 2cpu/32gb ram
PRD server: 2*300GB HDD 2cpu/32gb ram
Solution Manager: 2*300GB HDD 1cpu/16gb ram
MSA Storage HP P2000 G3 MSA 12*300Gb HDD
1 TAPE DEVICE
These three servers should be connected to the storage.
It's my first time working on this kind of landscape with a SAN.
I request you to please provide me the file systems and mount points with sizes. However, I am going through the standard installation document and trying to understand...
Regards
Raju
RAJU83 wrote:
> I request you to please provide me the file systems and mount points with sizes. However, I am going through the standard installation document and trying to understand...
Better to continue with that, and to plan and document your file system and sizing strategy.
'SAP Sizing' is very important; make use of it: service.sap.com/sizing
After you understand the guides' advice and explore standard sizing, share your queries with us.
This topic is very detailed and really a 'subjective' one; it depends from case to case and customer to customer. Ultimately the sizing exercise will help you; it can't be explained completely in a discussion thread at the outset.
Thanks -
SAP GoLive : File System Response Times and Online Redologs design
Hello,
An SAP GoingLive Verification session has just been performed on our SAP production environment.
SAP ECC6
Oracle 10.2.0.2
Solaris 10
As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them:
1/
We have been told that our file system read response times "do not meet the standard requirements".
The following datafile has been considered as having a too-high average read time per block:
File name | Blocks read | Avg. read time (ms) | Total read time per datafile (ms)
/oracle/PMA/sapdata5/sr3700_10/sr3700.data10 | 67534 | 23 | 1553282
I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
2/
We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
Actually we have BW loads that generate a "Checkpoint not complete" message every night.
I've read in sap note 79341 that :
"The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
Frankly, I have problems understanding this sentence.
Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
But how is it that frequent checkpoints should decrease the time necessary for recovery?
Thank you.
Any useful help would be appreciated.
Hello
>> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
The recommended ("standard") values are published at the end of SAP Note 322896.
23 ms does seem a little high to me; for example, we get around 4 to 6 ms on our productive system (with SAN storage).
>> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
Correct.
>> But how is it that frequent checkpoints should decrease the time necessary for recovery?
A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following three things happen in an Oracle database:
Every dirty block in the buffer cache is written down to the datafiles
The latest SCN is written (updated) into the datafile header
The latest SCN is also written to the controlfiles
If your redo log files are larger, checkpoints do not happen as often, and in that case the dirty buffers are not written down to the datafiles (unless free space in the buffer cache is needed). So if your instance crashes, you need to apply more redo logs to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN; ergo, the recovery is faster.
But this concept does not fully match reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is kept, for example FAST_START_MTTR_TARGET.
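For reference, the current redo log sizes and the recovery-time target can be inspected and set roughly as in the fragment below. It is shown commented out because it is illustrative only; the 300-second target is an arbitrary example, not a recommendation for your system:

```shell
# Illustrative sqlplus fragment (do not run as-is):
# sqlplus / as sysdba <<'SQL'
#   -- Current online redo log groups and their sizes:
#   SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;
#   -- Bound the instance-recovery time; Oracle then paces incremental
#   -- checkpoints itself, so larger redo logs stay safe to use:
#   ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;
# SQL
```

With an MTTR target set, you can size the redo logs for the BW load peaks (avoiding "Checkpoint not complete") while still bounding recovery time.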
Regards
Stefan -
Upload of Campaign and Leads from flat file into SAP CRM system
Hi Gurus,
We need to upload campaigns and leads from our legacy systems into SAP CRM. The source data will be available in the form of flat files (tab-delimited).
Please let me know the possible ways of doing this.
Reward points are assured.
Thanks in advance.
Hi,
You can use the External List Management functionality in CRM 5.0. Using the tab-delimited flat file, you can upload the contacts and create business partners, followed by lead transaction types for the business partners created. External List Management will also allow you to create target groups for executing the campaigns in SAP CRM.
Please reward points if helpful.
regards
Dinaker vikas -
Flat File Extract - SAP BW System Directory
Hi,
I want to export a flat file using Data Manager so that it is saved in the SAP BW system directory, i.e. the /SAP/ directories.
How can I do this? When I use Data Manager, it only gives me options to save in the SAP BPC directory structure.
Cheers,
Hi Leo,
Look at the chain CPMB/EXPORT_TD_TO_APPL.
Vadim -
File system recommendation in SAP -AIX
Dear All ,
Please check the current file systems of our PROD system:
Filesystem GB blocks Used Free %Used Mounted on
/dev/prddata1lv 120.00 66.61 53.39 56% /oracle/IRP/sapdata1
/dev/prddata2lv 120.00 94.24 25.76 79% /oracle/IRP/sapdata2
/dev/prddata3lv 120.00 74.55 45.45 63% /oracle/IRP/sapdata3
/dev/prddata4lv 120.00 89.14 30.86 75% /oracle/IRP/sapdata4
1. By how much is it recommended to increase the total space of the sapdata1..4 file systems in SAP?
Currently we have assigned 120 GB to each. If there is any recommendation for the total sapdata1..4 space, how should we proceed further?
That is, is there any limitation on increasing the sapdata file systems in SAP?
For example: sapdata1..4 -> maximum limit to increase is 200 GB, something like that.
2. Secondly, only sapdata4 keeps growing; the rest of the file systems do not grow much.
Kindly suggest.
Edited by: satheesh0812 on Dec 20, 2010 7:13 AM
Hi,
Please identify what activities were taking place that were consuming your space so rapidly.
Analyze whether the same activities are going to be performed in the future, and if so, for how many months.
Based on a comparison with the present growth trend you can predict the future trend and decide. Also analyze which table/tablespace is growing fast, what its related data files are and where they are located; you need to increase that particular file system rather than increasing all the file systems sapdata1..4.
Based on the gathered inputs you can easily predict future growth trends and accordingly increase your sizes with an additional 10% of space.
You can increase your file systems; there is no such limitation, but you may face performance-related problems. For this you need to create a separate tablespace and move the heavily growing tables into that tablespace, with a proper index rebuild.
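For keeping an eye on the growth, a small filter over POSIX `df -P` output (AIX supports -P too) can flag file systems above a threshold; run from cron, the numbers give you the trend data. The 80% limit is just an example:

```shell
# Print "mountpoint used%" for file systems at or above a %Used limit,
# reading POSIX `df -P` output on stdin (column 5 is Capacity, 6 is mount).
flag_full() {
    awk -v limit="$1" 'NR > 1 { gsub(/%/, "", $5)
                                if ($5 + 0 >= limit) print $6, $5 "%" }'
}

# Live check on the current host:
df -P | flag_full 80
```

Logging this output daily with a timestamp gives you a simple growth history per file system to base the extension decision on.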
Regards.
Edited by: Sita Rr Uppalapati on Dec 20, 2010 3:20 PM -
Export data from SAP Document Management System to File System(FileStore)
Hi,
We need to extract/export data (documents and metadata) from the SAP Document Management System to a Windows file system (file store). Can anyone suggest a tool or methodology for this?
Thanks,
Nilesh
I'm also looking for a solution to this problem. We are capturing comments in BW-BPS layouts. They get stored in BW's document management system, and we would like to export them out of the system for external reporting into an Access database.
-
Setting up the CS for DMS (SAP DB), not the file system
Hi Forum..
I would like to configure the content server (CS) for my DMS to check in/check out originals, which I have to do from a 3rd-party system. For this I'm using a BAPI.
My client's requirement is that he wants to save documents in the SAP DB (not the file system).
My questions are:
1. What are the necessary configuration steps that I have to follow?
2. For this, do I need to create any Z table?
Thanks in advance.
P.S.: Useful answers will be rewarded.
Rgds,
Vijay
Hi Vijay,
To achieve storage of documents in the SAP database as per your client's requirement, you need not use the Content Server through KPro.
Also, there is no need to add any Z table or any other configuration.
Just go to DC10 and uncheck the box indicating 'Use KPro'. To maintain the SAP database in place of the CS, go down to the File Size option and enter the size required in the SAP database (for example 9999999999, which is the maximum size you can allot).
Save your settings, then create/change the DIR and try to check in the document. Instead of the KPro storage categories (e.g. DMS_C1_ST), options related to SAP storage categories are displayed. From the storage categories Vault, Archive and SAP DB, select SAP DB, and all storage takes place in the SAP database.
When you are using the Content Server with KPro, even if you maintain the file size with some value, only KPro will work as the storage category.
Thank You,
Manoj
Please reward if useful.
Does /sapmnt need to be in a cluster file system (SAP ECC 6.0 with Oracle RAC)?
We are going to be installing SAP with Oracle 10.2.0.4 RAC on Linux SuSE 10 and OCFS2. The Oracle RAC documentation states:
You must store the following components in the cluster file system when you use RAC
in the SAP environment:
- Oracle Clusterware (CRS) Home
- Oracle RDBMS Home
- SAP Home (also /sapmnt)
- Voting Disks
- OCR
- Database
What I want to ask is whether I really need to put the SAP home (also /sapmnt) on a cluster file system. I will build a two-node Oracle 10g RAC, and I also have another two nodes to install the SAP CI and DI. My original idea is that /sapmnt is an NFS share mounted on all four nodes (RAC nodes and CI/DI), and all the Oracle components are on OCFS2 (only the two RAC nodes use OCFS2). Can anybody tell me whether the SAP home (also /sapmnt) can be NFS-mounted rather than on OCFS2? Thanks.
Best regards,
Peter
Hi Peter,
I don't think you need to keep /sapmnt on OCFS2. The reason any file system needs to be in the cluster is that, in a RAC environment, data stored in the cache of one Oracle instance must be accessible by any other instance by transferring it across the private network, preserving data integrity and cache coherency by transmitting locking and other synchronization information across cluster nodes.
As this applies to redo files, datafiles and control files only, you should be fine with an NFS mount of /sapmnt shared across the nodes instead of OCFS2.
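For illustration only, the /sapmnt share could then be mounted on each node with an fstab entry like the commented line below; the server name and export path are made up, and the options are common defaults to tune for your setup (hard mounts make SAP processes block rather than fail during a short NFS hiccup):

```shell
# Illustrative /etc/fstab line (server, path and options are assumptions):
# nfs-server:/export/sapmnt  /sapmnt  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0
```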
-SV