MaxDB data volumes usage
Hi all,
I have the following configuration for my MaxDB (which is a part of a DMS system).
Version is MaxDB 7.6
Data volumes are:
1) 2GB
2) 2GB
3) 10GB
4) 10GB
5) 10GB
6) 3GB
7) 3GB
8) 3GB
It seems that for volumes 3, 4 & 5 (10GB each), only 4.5GB is used.
My question is: why is the free 5.5GB of the 10GB volumes not used?
Is it a configuration issue or does MaxDB determine where to write automatically?
I don't see any performance problems.
Thanks,
Omri
> I have the following configuration for my MaxDB (which is a part of a DMS system).
> Version is MaxDB 7.6
>
> Data volumes are:
> 1) 2GB
> 2) 2GB
> 3) 10GB
> 4) 10GB
> 5) 10GB
> 6) 3GB
> 7) 3GB
> 8) 3GB
>
> It seems that for volumes 3, 4 & 5 (10GB each), only 4.5GB is used.
They will be used - just put more data into your content server (?).
> My question is: why the free 5.5GB of the 10GB volumes is not used?
> Is it a configuration issue or does MaxDB determine where to write automatically?
No, there is nothing wrong here.
MaxDB chooses the data volumes in which to save changed/new pages in an order based on their relative filling.
Nevertheless, as long as there is free space in a volume, it is possible that it will get used as well.
Maybe you're thinking of a feature that existed up to version 7.4, where the database would even out the filling degree during times of low activity. This feature is no longer present; instead, changed pages are relocated during savepoints.
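As a rough illustration (a simplified sketch, not MaxDB's actual algorithm), "relative filling" means a new page lands preferentially on the volume with the lowest used/total ratio, so differently sized volumes converge toward a similar filling degree:

```python
# Simplified sketch of filling-based volume selection - illustrative
# only, not MaxDB's real implementation.

def pick_volume(volumes):
    """volumes: list of (used_pages, total_pages) per data volume.
    Returns the index of the volume with the lowest relative filling."""
    return min(range(len(volumes)),
               key=lambda i: volumes[i][0] / volumes[i][1])

# Mirrors the configuration above: two small volumes at 80% filling and
# two 10GB volumes at 45% - new pages go to the emptier large volumes.
vols = [(1600, 2000), (1600, 2000), (4500, 10000), (4500, 10000)]
print(pick_volume(vols))
```

So the 10GB volumes do get written to; they simply have not yet received enough data to fill past 4.5GB.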
> I don't see any performance problems.
Why should you? A content server usually does not put the same level of traffic on the database as an OLTP system would. So you can use differently sized data volumes without any performance drop.
regards,
Lars
Similar Messages
-
Hello experts
We have added SAN storage to our server and now we want to move the data volumes to the SAN. Is there a way to do this that is shorter than a migration? And are there any parameters that we will need to change to point to the new data location?
Your ideas will be welcome.
Hi Sanjay,
I was able to find the blog just by using the SCN search...
http://scn.sap.com/community/maxdb/blog/2008/09/11/questions-to-sap-support-how-to-move-maxdb-volumes
Have fun,
Lars -
Managing Data Volumes in MaxDB
Hello,
Due to an upgrade from ERP 6.0 to ERP 6.0 EHP5, I had to add new data volumes to my MAXDB database.
I created 6 data volumes of 9.5GB and 2 data volumes of 38GB.
I am not too worried about the 9.5GB data volumes since they are close to 50% free.
However, the 38GB data volumes are only filled to around 13%. This consumes a lot of space on my server.
Is there anyway to reorganize the database? I need to free-up some space on the server.
I am new to MaxDB.
Thanks,
Suhan Hegde
Hello Suhan Hegde,
1. Please see the documents "Deleting Data Volumes" and "Volumes (Permanent Storage)" in the MaxDB library at
http://maxdb.sap.com/doc/7_8/44/d77a6368113ee3e10000000a114a6b/content.htm
2.
As you need to free up some space on the server, first check that your permanent data area usage is less than 9.5*6 GB. Create a complete data backup, just to be safe. Then delete the 38GB data volumes while the database is online; the data from the specified data volumes will be distributed to the remaining data volumes. More details are in the recommended document "Deleting Data Volumes" (see 1.), or you could use db_deletevolume, see the document at
http://maxdb.sap.com/doc/7_8/44/eefb7ab942108ee10000000a11466f/content.htm
or
you could do it using a backup/restore procedure: create a complete backup, initialize the instance for restore and change the data volume configuration, then continue with the restore.
3. If you are SAP customer => Please review SAP notes:
SAP Note No. 1173395
SAP Note No. 1423732 (see point 13)
4. There are online training sessions at
http://maxdb.sap.com/training/
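For reference, the online volume deletion from point 2 could look roughly like this in dbmcli. This is a hedged sketch: the database name, DBM credentials and volume id are placeholders, and the exact db_deletevolume syntax should be verified against the documentation linked above for your version.

```shell
# Hypothetical example - "CSD", "DBM,SECRET" and the volume id are
# placeholders, not values from this thread.
dbmcli -d CSD -u DBM,SECRET db_deletevolume DATA 0007
```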
Regards, Natalia Khlopina -
Hi all,
I get errors during the installation procedure of MaxDB 7.8.02.27 at the point where I define the database volume paths (step 4b).
If I use the default values, the database gets created without errors.
But if I make changes, e.g. to the size of the data volume, this error appears when I click Next:
"Invalid value for data volume size: data size of 0KB does not make sense Specify useful sizes for your log volumes".
If I create 2 data files with different names (DISKD0001, DISKD0002), I get an error message that I have used one filename twice.
Now it's getting strange: if I use the Previous button to move one step back and then use the Next button again, it sometimes accepts the settings and I'm able to start the installation, and the database gets created.
I'm working remotely on a VMware Server 2008 R2 (EN) machine and I'm using the x64 package of MaxDB.
Any ideas?
Thanks
Martin Schneider
Hi Martin,
A general system error occurs if the *.vmdk file is larger than the maximum size supported ... It has to be replaced with the nearest acceptable value associated with the various block sizes that you can use to create a datastore.
You may need to resize your block size while choosing VMFS datastore.
Hope this is useful.
Regards,
Deepak Kori -
Hi all,
I have a question regarding the way MaxDB allocates space in data volumes.
Is there a reorganization of the DB data in the background, or does MaxDB just "fill up" the available volumes?
For example:
1)
I create a database with 10 volumes and the space usage is increasing.
Do the volumes get filled up one after another (first fill volume 0001, then volume 0002, ...), or is there balancing over all available volumes?
2)
What happens if I now have to extend DB space and add two new data volumes?
Does MaxDB now use only the new volumes, or is there a reorganization which leads to an (almost) equal distribution of IO over all data files?
If there is no reorganization of DB data, is the only way to extend DB space without a loss in performance to add several data volumes or to do a backup/restore procedure?
Best regards,
Sascha
> Hi all,
Hi Sascha,
> 1)
> I create a database with 10 volumes and the space
> usage is increasing.
> Does the volumes get filled up one after another
> (first fill volume 0001, then volume 0002, ...) or is
> there used a balancing over all available volumes?
the write load of the database is distributed over all attached data volumes to increase the write speed (it's best to have each volume on its own I/O channel). So your data volumes will fill to the same level.
> What does happen, if I now have to extend db space
> and add two new data volume?
> Does maxdb now use only the new volumes or is there a
> reorganization which leads to an (almost) equal
> distribution of IO over all datafiles.
There is currently no redistribution of the data in place. The database still uses all available data volumes for writing, but the new (empty) volumes are preferred.
We are aware of the increased i/o load on the new data volumes, but the automatic balancing of data volumes is not available yet.
> If there is no reorganization of db data, the only
> way to extend db space without loss in performance
> would be to add several data volumes or to do an
> backup/restore procedure?
Backup/restore is one viable solution, but you can also redistribute the data manually in online mode by adding a number of new data volumes and then deleting the old ones.
regards,
Henrik -
Converting data volume type from LINK to FILE on a Linux OS
Dear experts,
I am currently running MaxDB 7.7.04.29 on Red Hat Linux 5.1. The file types for the data volumes were
initially configured as type LINK and correspondingly made links at the OS level via "ln -s" command.
Now (at the OS level) we have replaced the link with the actual file and brought up MaxDB. The system
comes up fine without problems, but I have a two-part question:
1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files.
(might we encounter a performance problem).
2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
Your feedback is greatly appreciated.
--Erick
> 1) What are the ramifications if MaxDB thinks the data volumes are links when in reality they are files.
> (might we encounter a performance problem).
I never saw any problems, but since I don't have a Linux system at hand, I cannot tell you for sure.
It may come down to how a file is opened with special options like DirectIO when it is a link...
> 2) In MaxDB, what is the best way to convert a data volume from type LINK to type FILE?
There's no 'converting'.
Shut down the database to offline mode.
Now log on to dbmcli and list all the parameters.
You'll get three to four parameters per data volume, one of them called
DATA_VOLUME_TYPE_0001
where 0001 is the number of the volume.
Open a parameter session and change the value of the parameters from 'L' to 'F':
param_startsession
param_put DATA_VOLUME_TYPE_0001 F
param_put DATA_VOLUME_TYPE_0002 F
param_put DATA_VOLUME_TYPE_0003 F
param_checkall
param_commitsession
After that, the volumes are recognized as files.
regards,
Lars
Edited by: Lars Breddemann on Apr 28, 2009 2:53 AM -
Data Distribution in the Data Volumes
Hello
Is it important that data be uniformly distributed in the data volumes?
Is it possible to redistribute the data after adding some more file?
Thank you & regards,
T.C.
As of MaxDB version 7.7.06.09, such a mechanism can be activated using the parameter EnableDataVolumeBalancing.
If the parameter EnableDataVolumeBalancing is set to the value YES (deviating from the default), all data is implicitly distributed evenly across all data volumes after you add a new data volume or delete a data volume.
https://service.sap.com/sap/support/notes/1173395 -
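Following the dbmcli parameter-session pattern shown elsewhere in this thread, enabling the balancing could look roughly like this. The database name and DBM credentials are placeholders; treat this as a sketch and check the SAP note above for version specifics.

```shell
# Hypothetical session - "MYDB" and "DBM,SECRET" are placeholders.
dbmcli -d MYDB -u DBM,SECRET <<EOF
param_startsession
param_put EnableDataVolumeBalancing YES
param_checkall
param_commitsession
EOF
```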
Livecache data cache usage - table monitor_caches
Hi Team,
We have a requirement to capture the data cache usage of liveCache on an hourly basis.
Instead of doing it manually by going into LC10 and copying the data into an Excel sheet, is there a table which captures this data on a periodic basis that we can use to get the report in a single shot?
"monitor_caches" is one table which holds this data, but we are not sure how we can get the data from this table. We also need to see the contents of this table, and we are not sure how to do that.
As "monitor_caches" is a MaxDB table, I am not sure how I can get the data from it. I have never worked with MaxDB before.
Has anyone had this requirement?
Warm Regards,
Venu
Hi,
For cache usage, the tables below can be referred to:
Data Cache Usage - total (table MONITOR_CACHES)
Data Cache Usage - OMS Data (table MONITOR_CACHES)
Data Cache Usage - SQL Data (table MONITOR_CACHES)
Data Cache Usage - History/Undo (table MONITOR_CACHES)
Data Cache Usage - OMS History (table MONITOR_CACHES)
Data Cache Usage - OMS Rollback (table MONITOR_CACHES)
Out Of Memory Exceptions (table SYSDBA.MONITOR_OMS)
OMS Terminations (table SYSDBA.MONITOR_OMS)
Heap Usage (table OMS_HEAP_STATISTICS)
Heap Usage in KB (table OMS_HEAP_STATISTICS)
Maximum Heap Usage in KB (table ALLOCATORSTATISTICS)
System Heap in KB (table ALLOCATORSTATISTICS)
Parameter OMS_HEAP_LIMIT (KB) (dbmrfc command param_getvalue OMS_HEAP_LIMIT)
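One way to read MONITOR_CACHES periodically is with MaxDB's sqlcli tool. This is a hedged sketch: the database name and user credentials are placeholders, and the available columns should be checked in your version first.

```shell
# Hypothetical example - "LC1" and "MONA,RED" are placeholders.
# Redirect the output to a file and schedule via cron for hourly capture.
sqlcli -d LC1 -u MONA,RED "SELECT * FROM SYSDBA.MONITOR_CACHES"
```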
For reporting purposes, look into the following BW extractors and develop a BW report.
/SAPAPO/BWEXDSRC APO -> BW: Data Source - Extractor
/SAPAPO/BWEXTRAC APO -> BW: Extractors for Transactional Data
/SAPAPO/BWEXTRFM APO -> BW: Formula to Calculate a Key Figure
/SAPAPO/BWEXTRIN APO -> BW: Dependent Extractors
/SAPAPO/BWEXTRMP APO -> BW: Mapping Extractor Structure Field
Hope this helps.
Regards,
Deepak Kori -
How to measure and limit the data volume?
Morning.
I need a tool to measure and limit the data volume of my internet usage. My internet tariff allows a maximum of 5 GB data volume per month. If my usage exceeds that amount, the bandwidth is reduced to only 64 kB/s, or the excess data volume must be paid for at an extraordinarily expensive rate.
Do you know a tool that measures the data volume in a given time period and can alert me or limit the internet connection, for instance if the data volume halfway through the month has already exceeded half of the data volume for the entire month?
Kind regards, vatolin
You could generate a large amount of data and then use any SNMP viewer (BMC Dashboard, SolarWinds, Nagios, CiscoWorks etc.) to see the throughput of the interfaces at peak. But why bother? Cisco has been noted by numerous research firms (Gartner etc.) to be very precise about their stated throughputs.
Regards
Farrukh -
Understanding replica volume and recovery point volume usage with SQL Express Full Backup
I am running some trials to test DPM 2012 R2's suitability for protecting a set of SQL Server databases, and I am trying to understand what happens when I create a recovery point with Express Full Backup.
The databases use the simple recovery model, and in the tests I have made so far I have loaded more data into the databases between recovery points, since that will be a typical scenario - the databases will grow over time. The database files are set to autogrowth by 10%.
I have been looking at the change in USED space in the replica volume and in the recovery point volume after new recovery points and have a hard time understanding it.
After the first test, where data was loaded into the database and an Express Full Backup recovery point was created, I saw an increase in used space of 85 GB in the replica volume and 29 GB in the recovery point volume. That is somewhat more than I think the database grew (I realize that I should have monitored that, but did not), but anyway it is not completely far out.
In the next test I did the same thing except I loaded twice as much data into the database.
Here is where it gets odd: This causes zero increased usage in the replica volume and 33 GB increased use in the recovery point volume.
I do not understand why the replica volume use increases with some recovery points and not with others.
Note that I am only discussing increased usage in the volumes - not actual volume growth. The volumes are still their original size.
I have spent 3-4 days on the test and the retention period is set to 12 days, so nothing should be expired yet.
Hi,
The replica volume usage represents the physical database file(s) size. The database file size on the replica should be equal to the database file size on the protected server. This covers both .mdf and .ldf files. If, when you load data into the database, you overwrite current tables rather than adding new ones, or if there is white space in the database files and the load simply uses that white space, then there will not be any increase in the file size, so there will not be any increase in the replica used space.
The recovery point volume will only contain the delta changes applied to the database files. As the changed blocks overwrite the files on the replica during an express full backup, the VSS (volsnap.sys) driver copies the old blocks about to be overwritten to the recovery point volume before allowing the change to be applied to the file on the replica.
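A toy model of that copy-on-write behavior (purely illustrative, not the actual volsnap.sys driver) shows why recovery point usage grows with every overwritten block, while replica usage only grows when the files themselves grow:

```python
# Illustrative copy-on-write sketch - not the real volsnap.sys driver.

def express_full(replica, changes, recovery_point):
    """replica: dict block_id -> data (the database file blocks).
    changes: blocks written by the express full backup.
    Old versions of overwritten blocks are preserved in recovery_point."""
    for block, new_data in changes.items():
        if block in replica and block not in recovery_point:
            recovery_point[block] = replica[block]  # copy old block first
        replica[block] = new_data

replica = {0: "a", 1: "b", 2: "c"}
rp = {}
# Overwrite block 1 (consumes recovery point space) and append block 3
# (grows the replica, but needs no old-block copy).
express_full(replica, {1: "B", 3: "d"}, rp)
print(rp)
print(replica)
```

Loads that reuse white space change no existing blocks' sizes, which matches the zero replica growth observed in the second test.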
Hope this helps explain what you are seeing.
Regards, Mike J. [MSFT]
Hi
I am offloading the content of the DMS_CONT1_CD1 table (size 320GB) from an ECC system to a Content Server based on a MaxDB database, using program RSIRPIRL (note 389366). I have installed a MaxDB instance with 18 data volumes of 18GB each on the content server.
I am not sure how much space I should allocate for the log volumes of a 320GB database to avoid a log-full situation during the content offload. The content server will be running with LOG WRITING ON.
The Unix team will take the backup with the TSM tool after the offload is completed.
Is it advisable to put the database in LOG OVERWRITE MODE for a large data transfer? Please advise.
> I am offloading the content of the DMS_CONT1_CD1 table (size 320GB) from an ECC system to a Content Server based on a MaxDB database, using program RSIRPIRL (note 389366). I have installed a MaxDB instance with 18 data volumes of 18GB each on the content server.
> Is it advisable to put the database in LOG OVERWRITE MODE for a large data transfer? Please advise.
Yes, you might use this feature for your data loading.
All you have to keep in mind here is that if the data load fails partway through, you have to restore the content server database to the state before the loading started (there is no automatic resume functionality in the data loader).
A nice option to use here is the MaxDB Snapshot.
Just take a backup and a snapshot right before starting to load the data.
In case the database crashes during the data load and there is physical damage to the data volumes, you can use the backup to restore the database.
In case of less severe problems, where you just want to start over the load, you can easily restore the snapshot (which is done in seconds) and restart the data load.
For the log volume size - most customers just go with one log volume of 1GB.
For a content server (and most other applications) that's enough by far and, due to the way MaxDB handles log data and volumes, there is no advantage in having multiple log volumes!
So stick with one as it keeps things easy.
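The snapshot workflow described above could be scripted roughly as follows. The util_execute snapshot commands and all names (database CSD, DBM credentials) are assumptions; verify the exact commands against the MaxDB documentation for your release before relying on them.

```shell
# Hypothetical sketch - names and exact command syntax are assumptions.
dbmcli -d CSD -u DBM,SECRET db_admin                      # snapshots need admin mode
dbmcli -d CSD -u DBM,SECRET util_execute CREATE SNAPSHOT
dbmcli -d CSD -u DBM,SECRET db_online
# ... run the data load ...
# If the load must be restarted, revert to the snapshot:
dbmcli -d CSD -u DBM,SECRET db_admin
dbmcli -d CSD -u DBM,SECRET util_execute RESTORE SNAPSHOT
dbmcli -d CSD -u DBM,SECRET db_online
```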
regards,
Lars -
Unable to change stock posting date at usage decision while inspecting HUs
If we were using materials without WMS, it would be simple: there's a button in the stock posting screen with which we are able to change the document date and posting date; but we're using WM and the screen is slightly different: the button I'm referring to is gone!
So, how do I change the posting date when posting stock in QA11 when we're using handling units? An end-user told me that in version 4.7 this was possible. I don't think so ...unless something was customized in the WM IMG... or maybe they were using a USER EXIT to bring up a pop-up window for this (I'm starting to believe that this was implemented..). This is the first time I have worked with HUs, so I don't know how to manage this.
Anyone?
Seba
Edited by: Sebastian Sniezyk on Apr 3, 2009 10:16 AM
I solved it in this topic: Changing posting date at usage decision for handle units. How?
-
How can you see what data usage was for on the bill? It only shows the date and usage but not the reason for the usage.
You cannot see this info on VZW's site. There are apps for smartphones that will break the info down, though, by site or by what was downloaded.
-
Changing posting date at usage decision for handle units. How?
Does anybody know how to change the posting date at usage decision for handling units (HU)?
If the material is not managed by HU, SAP allows this modification (there's a button with a hat on it referring to the material document header).
For HUs, I cannot seem to change the posting date!
Seba
Yes, it's possible! Well... at least for versions 4.6C, 4.7 and ECC 5.
I found SAP Note #752131, in which SAP explains that you can create the pushbutton from the Screen Painter, referring to the document header.
That's all... so simple, and it works fine!
This note applies ONLY to versions 4.6C, 4.7 and ECC 5 (which is the version the customer I am with has installed).
Thanks anyway.
Sebas -
I have a website (web.mac.com/jmiller22) created using iWeb and hosted on iDisk. A little over a month ago I started posting my internet radio show to my website as a podcast. The podcast is also available from the iTunes Music Store. I've now started getting e-mails from .Mac as my Data Transfer usage creeps up (90% according to today's e-mail). I run StatTracker on my site and it does a nice job of tracking site visits, but it doesn't appear to track downloads. I'm not finding anything on the .Mac site either (apart from disk usage) and I'm not getting any feedback from iTunes (I'm not sure if I really expected any). How do other folks track podcast downloads and other data transfer usage on .Mac?
Power Mac G5, Mac OS X (10.4.8)
James,
I'm doing fine, thanks.
I don't make Podcasts or use these special "feeds" so I'm really not sure of the steps.
iWeb uses special code (the long string identifier) and a javascript file to point to the URL of the file. The RSS created in iWeb and then Published to .Mac does all the magic.
I would think that an edit of the .js file to point to the new location of the file would be all that's needed to keep things working.
I never tried it, but it seems like a no-brainer workaround for the bandwidth limits of the .Mac service. I guess someone with a file and a server could test the theory.
I've placed many of my "movies" into my iDisk Public folder so one file can do multiple work. It would be the same if I had a different server.
Another method would be to upload the file to the other server first and then make the iWeb page. "Drag" the URL into iWeb and it shouldn't be uploaded to .Mac when the page is published (I think and no way to test).