Implementing PaaS (CloudFoundry/BOSH) feeds a huge number of (unwanted) ProtectionServers into DPM
Hi
We have a Hyper-V cluster with VMs on Cluster Shared Volumes. We are using System Center (2012 R2) VMM and DPM to back up the core infrastructure and a selection of VMs from some of our VMM clouds.
Ongoing work is to implement a PaaS on Hyper-V/System Center, using CloudFoundry/BOSH. Because of a lot of unit tests, many short-lived VMs are created that we never want to back up with DPM. It looks like the DPM agent on the Hyper-V hosts in the cluster feeds every VM it sees into DPM as a ProductionServer, even though these VMs do not have any DPM agent installed.
Running this SQL against the DPM database:
SELECT count(*) FROM [DPMDB_MyDPM].[dbo].[vw_DPM_Server]
gives 5,531 entries. A huge number of these entries are from the VMs that the PaaS CloudFoundry/BOSH created in the IaaS: VMs that have never been backed up by DPM (and have no DPM agent installed).
We see the same with PowerShell:
PS C:\> $ps = Get-ProductionServer | Where-Object {$_.Name -like "*bosh_vm_being_created*"}
PS C:\> $ps.length
5294
PS C:\>
* The huge number of (unwanted) production servers causes our DPM to run slower and slower, and we see our DPM SQL database working harder and harder. Today we only back up around 20 VMs, the System Center MSSQL instance, and a few hosts with the DPM agent.
Question 1 - How can we remove these unwanted ("bosh_vm_being_created") ProductionServers from our DPM? They have no DPM agent installed and no recovery point in DPM, but they are still listed in DPM as ProductionServers. Why?
Question 2 - How can we configure DPM to filter out these PaaS/CloudFoundry/BOSH VMs so that they never reach the DPM system?
Br. Rune
Hi
Unfortunately I have no solution for my case yet. The number of ProductionServers on our DPM server keeps growing, and our DPM keeps getting slower.
PS C:\> $ps = Get-DPMProductionServer -DPMServerName <DPMhostname>
PS C:\> $ps.length
8525
PS C:\>
I guess there must be a purge job that is not cleaning old VMs (objects) out of the DPM server. In our VMM we only have around 200 VMs, so most of the ProductionServers in our DPM are old VMs that no longer exist.
When we try to use the Remove-ProductionServer.ps1 PowerShell script to remove one of these ProductionServers, we get an error because the VM no longer exists (and the VM does not have any agent installed).
Does anyone have any experience with this?
Br. Rune
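For what it's worth, the cleanup Rune is after is essentially a set difference: take DPM's production-server list, keep anything that matches the throwaway-VM naming pattern or that no longer exists in VMM, and feed those names to the removal script. The DPM/VMM cmdlets themselves are PowerShell; the sketch below shows only the diff logic in plain Python, with made-up names standing in for the cmdlet output:

```python
import re

def find_stale_servers(dpm_servers, live_vms, pattern=r"bosh_vm_being_created"):
    """Return DPM production-server names that match the throwaway-VM
    pattern or no longer exist in VMM (candidates for removal)."""
    live = {name.lower() for name in live_vms}
    throwaway = re.compile(pattern)
    return [s for s in dpm_servers
            if throwaway.search(s) or s.lower() not in live]

# Hypothetical names, standing in for Get-DPMProductionServer / VMM output.
dpm = ["host01", "bosh_vm_being_created_42", "oldvm77"]
vmm = ["host01"]
print(find_stale_servers(dpm, vmm))  # ['bosh_vm_being_created_42', 'oldvm77']
```

Each name the diff yields would then be passed to Remove-ProductionServer.ps1 (which, per the thread, currently fails for VMs that no longer exist).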
Similar Messages
-
Hi All,
I have been asked to create a file-to-IDoc scenario in PI. The problem is that the file will have around 200,000 records (96 MB), which means I would have to read the 200,000 records from the file and create 200,000 PO IDocs at once. I know this is not feasible. Does anyone have experience with this? How did you solve the problem?
Thanks a lot!
Charles

There are a few ways to implement this.
Though the file has a huge number of records, you can tweak or control the number of IDocs created on the receiver side. Refer to Michal's blog on editing the occurrence of the target IDoc structure to send only as many IDocs as needed.
https://wiki.sdn.sap.com/wiki/display/XI/File%20to%20Multiple%20IDOC%20Splitting%20without%20BPM
If your sender side is a flat file, then in the content conversion set the parameter "Recordsets per Message" to 100 or so, so that each message from the sender structure creates 100 IDocs. Refer to the SDN forum for FCC parameters and sender FCC adapter scenarios.
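The effect of "Recordsets per Message" is plain batching: instead of one 200,000-record message, the adapter emits a stream of messages holding N recordsets each. The split itself amounts to this (an illustrative Python sketch, not PI code):

```python
def split_recordsets(records, per_message=100):
    """Yield batches of at most `per_message` records, mirroring the
    file adapter's 'Recordsets per Message' behaviour."""
    for i in range(0, len(records), per_message):
        yield records[i:i + per_message]

# 250 dummy records split into messages of 100 recordsets each.
batches = list(split_recordsets(list(range(250)), per_message=100))
print([len(b) for b in batches])  # [100, 100, 50]
```

Each batch then becomes one message on the sender side, and the receiver turns each recordset into an IDoc.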
Refer this thread
Recordsets per Message in File adapter -
How can I organize a huge number of events?
I just started with Mac and iPhoto. I imported a lot of photos (>17,000) from Windows (Google Picasa) into iPhoto. Now I have a huge number of events, which I would like to sort by, e.g., year and month, with the events placed within the years and months.
Is this possible or what would you suggest to handle a lot of photos?
Thank you in advance for any helpful answer.
Alumsch

alumsch wrote:
Now I have a huge number of events, which I would like to sort by, e.g., year and month. Is this possible, or what would you suggest for handling a lot of photos?
Assuming that you have events sorted by date (Events menu ==> Sort Events), your events will be sorted by date and time. You cannot create a substructure within events; Events is one large, flat set of photos.
If you want a hierarchical structure, use albums and folders: albums hold photos, and folders hold albums or other folders. You can also use smart albums to instantly find all photos from a date or a date range, or use the search window in the lower left.
Events are a very basic, inflexible, and pretty much automatic form of organization, just a starting point for holding photos.
I generally merge trips into a single event and leave the others time-based; others merge even more, with events like "1st quarter 2010", etc.
LN -
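The year/month hierarchy the poster above wants maps naturally onto folders of albums, and the grouping step itself is simple. A sketch with made-up filenames and dates (not iPhoto's actual data model):

```python
from collections import defaultdict
from datetime import date

def group_by_year_month(photos):
    """Group (name, date) pairs into a {year: {month: [names]}} tree,
    mirroring a folders-of-albums hierarchy."""
    tree = defaultdict(lambda: defaultdict(list))
    for name, d in photos:
        tree[d.year][d.month].append(name)
    return tree

# Hypothetical photos standing in for an imported library.
photos = [("beach.jpg", date(2010, 7, 4)),
          ("tree.jpg", date(2010, 12, 25)),
          ("party.jpg", date(2011, 1, 1))]
tree = group_by_year_month(photos)
print(sorted(tree[2010].keys()))  # [7, 12]
```

Smart albums with a date-range criterion give the same grouping inside iPhoto without any scripting.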
Lightroom or Photoshop Elements for administrating huge number of photos?
Dear photo experts,
At home we have a huge number of photos taken over the years, and we are looking for software that can organize all of them. We currently have:
Adobe Photoshop Elements 10 (Mac)
Adobe Lightroom 3 (Mac)
However, for organizing photos we have so far used neither of them, but a third program (and we are not happy with how it handles our huge catalog of photos).
Our photos are stored on a server (network attached storage), organized by date and event.
I read that Adobe Photoshop Elements cannot organize photos stored on a network drive.
Now my questions:
- Can Adobe Lightroom organize photos stored on a network drive?
- Is Adobe Lightroom capable of organizing a huge number of photos in one single catalog (separating them via tags)?
What are your experiences?
Thanks a lot!
JMickey

I read that Adobe Photoshop Elements cannot organize photos stored on a network drive.
I thought the opposite was true
Can Adobe Lightroom organize photos stored on a network drive?
As we say here in Rochester, NY, YES it can
Is Adobe Lightroom capable to organize a huge amount of photos in one single catalog (separating them via tags)?
Yes, this is one of Lightroom's strengths
What are your experiences?
I use Lightroom for all of my photo management (yes, I said ALL). I never use the operating system for photo management. Lightroom works great. People here in this forum who have much larger catalogs than I do (over 1/4 million photos) also use Lightroom to manage their photos. -
How do I delete a huge number of duplicate albums from my itunes for mac library?
I have a huge iTunes music library (over 900 GB), and as it turns out I have a huge number of duplicate albums. Is there a way to automatically get rid of the duplicates so I am left with only one copy of each? If not, what is the best way to identify and remove my library's duplicate albums? Thanks!
Cheers,
Gerry

iTunes does not have any automated way of deleting duplicates.
You can identify the duplicates by
View > Show Duplicates
or
Alt + View > Show Exact Duplicates
(or, if you are on Windows, Shift + View > Show Exact Duplicates).
There is a third-party tool which is free to download and then register; the copy I tried worked fine as a trial download:
Tune Sweeper -
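If you would rather script the detection, duplicate albums can be keyed on (artist, album) and flagged when the same key shows up under more than one path. A rough sketch over hypothetical track metadata (not the real iTunes library format):

```python
from collections import defaultdict

def duplicate_albums(tracks):
    """Return (artist, album) keys that appear under more than one
    folder/path, i.e. likely duplicate album copies."""
    seen = defaultdict(set)
    for t in tracks:
        seen[(t["artist"].lower(), t["album"].lower())].add(t["path"])
    return {k: paths for k, paths in seen.items() if len(paths) > 1}

# Hypothetical metadata rows standing in for an exported library listing.
tracks = [
    {"artist": "Miles Davis", "album": "Kind of Blue", "path": "/music/a"},
    {"artist": "Miles Davis", "album": "Kind of Blue", "path": "/music/b"},
    {"artist": "Nina Simone", "album": "Baltimore",    "path": "/music/c"},
]
dupes = duplicate_albums(tracks)
print(len(dupes))  # 1
```

Real-world matching usually also needs fuzzier keys (remaster tags, "Disc 1" suffixes), which is what dedicated tools handle for you.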
Huge number of files in the profile directory with at sign in the name
Hi,
I noticed that my wife's Firefox 35, running on Windows 8.1 32-bit, has a huge number of files like:
cert8@2014-03-25T19;02;18.db
content-prefs@2014-01-30T21;28;58.sqlite
cookies@2014-01-08T18;12;29.sqlite
healthreport@2015-01-20T06;44;46.sqlite
permissions@2015-01-19T10;26;30.sqlite
webappsstore@2015-01-20T06;44;48.sqlite
Some files are quite new.
The original files somehow get backed up, but I cannot figure out how. My own PC does not contain such files.
Thanks

I've called the big guys to help you. Good luck.
BTW, did you post this from the wife's computer?
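One observation while waiting for the big guys: the names all follow one pattern, the original filename plus an @-separated timestamp in which ':' has been replaced by ';' (':' is not allowed in Windows filenames). Assuming that pattern holds, the stray copies are easy to list, e.g.:

```python
import re

# name@YYYY-MM-DDTHH;MM;SS.ext  (assumed pattern of the backup copies)
STAMPED = re.compile(r"^(?P<base>.+)@\d{4}-\d{2}-\d{2}T\d{2};\d{2};\d{2}\.(?P<ext>\w+)$")

def timestamped_copies(filenames):
    """Return names that look like timestamped profile-file backups."""
    return [f for f in filenames if STAMPED.match(f)]

names = ["cert8@2014-03-25T19;02;18.db", "cookies.sqlite",
         "healthreport@2015-01-20T06;44;46.sqlite"]
print(timestamped_copies(names))
```

That only identifies the copies; *what* is renaming them (Firefox itself, sync, or some backup tool) is the open question in this thread.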
Type '''about:support''' in the address bar and press '''Enter.'''
Under the main banner, press the button '''Copy Text To Clipboard'''.
Then in the reply box at the bottom of this page,
do a right click in the box and select '''Paste.'''
This will show us your system details.
'''No Personal Information Is Collected.''' -
Query using system parameter LEVEL returns incorrect huge number of records
We migrated our database from Oracle *9.2.0.6* to *11.2.0.1*.
The query below throws "ORA-01788: CONNECT BY clause required in this query block".
select * from (
  select a.BOARD_ID, code, description, is_displayable, order_seq, board_parent_id, short_description, IS_SUB_BOARD_DISPLAYABLE, LEVEL child_level, sp_board.get_parent_id(a.board_id) top_parent_id, is_top_selected isTopSelected
  from boards a, ALERT_MESSAGE_BOARD_TARGETS b
  where a.board_id = b.board_id and is_displayable = 'Y' and alert_message_id = 5202) temp
start with board_parent_id = 0
connect by prior board_id = board_parent_id
ORDER SIBLINGS BY order_seq;
Based on online resources, we modified the hidden parameter *_allow_level_without_connect_by* by executing:
alter system set "_allow_level_without_connect_by"=true scope=spfile;
After performing the above, ORA-01788 is resolved.
The new issue is that the same query returns *9,015,853 records in 11g*, while *9i returns 64 records*. 9i returns the correct number of records, and the cause of 11g returning more records is the LEVEL pseudocolumn used in the query.
Why is 11g returning an incorrect, huge number of records?
Any assistance in addressing this is greatly appreciated. Thanks!

The problem lies in the query: LEVEL should not be used inside a subquery that has no CONNECT BY of its own. After LEVEL is moved to the main query, the number of returned records is the same as in 9i.
select c.BOARD_ID, c.code, c.description, c.is_displayable, c.order_seq, c.board_parent_id, c.short_description, c.IS_SUB_BOARD_DISPLAYABLE, LEVEL child_level, c.top_parent_id, c.isTopSelected
from (
  select a.BOARD_ID, code, description, is_displayable, order_seq, board_parent_id, short_description, IS_SUB_BOARD_DISPLAYABLE, sp_board.get_parent_id(a.board_id) top_parent_id, is_top_selected isTopSelected
  from boards a, ALERT_MESSAGE_BOARD_TARGETS b
  where a.board_id = b.board_id and is_displayable = 'Y' and alert_message_id = 5202
) c
start with c.board_parent_id = 0
connect by prior c.board_id = c.board_parent_id
ORDER SIBLINGS BY c.order_seq
Huge number of idle connections from loopback ip on oracle RAC node
Hi,
We have a two-node 11gR2 (11.2.0.3) Oracle RAC. We are seeing a huge number of idle connections (more than 5,000 on each node), increasing day by day. All the idle connections are from the VIP and the loopback address (127.0.0.1.47971):
netstat -an |grep -i idle|more
127.0.0.1.47971 Idle
any insight will be helpful.
The server is suffering memory issues occasionally (once in a month).
ORA-27300: OS system dependent operation:fork failed with status: 11
ORA-27301: OS failure message: Resource temporarily unavailable
Thanks

user12959884 wrote:
We are seeing a huge number of idle connections (more than 5,000 on each node) from the VIP and loopback address, increasing day by day, and the server occasionally suffers memory issues (ORA-27300/ORA-27301, fork failed, resource temporarily unavailable).

We cannot control what occurs on your DB server.
How do I ask a question on the forums?
SQL and PL/SQL FAQ
Post the results of the following SQL:
SELECT * FROM V$VERSION; -
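To confirm the "increasing day by day" claim with numbers, the netstat output above can be tallied per local address with a short script and re-run periodically. A sketch, assuming Solaris-style `netstat` output with the state in the last column (column layout varies by OS):

```python
from collections import Counter

def count_idle(netstat_lines):
    """Count 'Idle' connections per local address in netstat output."""
    tally = Counter()
    for line in netstat_lines:
        parts = line.split()
        if parts and parts[-1] == "Idle":   # state is the last column here
            tally[parts[0]] += 1
    return tally

sample = [
    "127.0.0.1.47971            Idle",
    "127.0.0.1.47971            Idle",
    "10.0.0.5.1521  10.0.0.9.33210  49640 0 49640 0 ESTABLISHED",
]
print(count_idle(sample)["127.0.0.1.47971"])  # 2
```

Graphing that count over a few days shows whether a listener or agent on the loopback interface is leaking connections.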
Every time I go to a site online I get a message that a huge number of errors occurred going to it. Why?
I can load that site no problem.
If you see Develop in the Safari menu bar (top of your screen) click Develop.
If you see any check ✔ marks, select that item one more time to deselect.
Then try that site.
If the Develop menu was not available, go to Safari > Preferences > Extensions
If there are any installed, turn that OFF, quit and relaunch Safari to test.
And in Safari > Preferences > Security, make sure Enable plug-ins and Enable Java are checked, and deselect Block pop-up windows.
Quit and relaunch Safari.
If installed, try temporarily disabling anti-virus software. -
Slow due to huge number of tables
Hi,
Unfortunately, we have a really huge number of tables in the (Advantage Server) database: about 18,000+.
Firing the ActiveX preview through the RDC, or just running a preview in the designer, slows things to a crawl.
Any hints? ( Besides get rid of that many tables )
Thanks
Oskar

Hi Oskar
The performance of a report is related to:
External factors:
1. The amount of time the database server takes to process the SQL query.
( Crystal Reports send the SQL query to the database, the database process it, and returns the data set to Crystal Reports. )
2. Network traffics.
3. Local computer processor speed.
( When Crystal Reports receives the data set, it generates a temp file to further filter the data when necessary, as well as to group, sort, process formulas, ... )
4. The number of record returned
( If a SQL query returns a large number of records, it will take longer to format and display than if was returning a smaller data set.)
Report design:
1. Where the record selection is evaluated.
Ensure your record selection formula can be translated into SQL so the data can be filtered down on the server; otherwise the filtering is done in a temp file on the local machine, which is much slower.
Many functions cannot be translated into SQL because there may be no standard SQL equivalent for them. For example, a control structure like IF THEN ELSE cannot be translated into SQL; it will always be evaluated in Crystal Reports. If you use IF THEN ELSE on a parameter, the result of the condition is converted to SQL, but as soon as the condition uses database fields it is no longer translated into SQL.
2. How many subreports the report contains and in which sections they are located.
Minimise the number of subreports used, or avoid subreports entirely if possible. Subreports are reports within a report: if you have a subreport in a details section and the report returns 100 records, the subreport is evaluated 100 times, so it queries the database 100 times. This is often the biggest factor in why a report takes a long time to preview.
3. How many records will be returned to the report.
A large number of records will slow down the preview of the report. Ensure you only return the necessary data, by creating a record selection formula, or by basing your report on a stored procedure or a Command object that returns only the desired data set.
4. Whether you use the special field "Page N of M" or "TotalPageCount".
When the special field "Page N of M" or "TotalPageCount" is used on a report, every page of the report has to be generated before the first page can be displayed, so it takes more time to display the first page.
If you want to improve the speed of a report, remove the special field "Page N of M" or "Total Page Count", or any formula that uses the function "TotalPageCount". When those aren't used, viewing a report formats only the requested page, not the whole report.
5. Link tables on indexed fields whenever possible.
6. Remove unused tables, unused formulas, unused running totals from the report.
7. Suppress unnecessary sections.
8. For summaries, use conditional formulas instead of running totals when possible.
9. Whenever possible, limit records through selection, not suppression.
10. Use SQL Expression Fields to convert fields used in record selection instead of formula functions.
For example, if you need to concatenate two fields, instead of doing it in a formula you can create a SQL Expression Field. The fields are then concatenated on the database server instead of in Crystal Reports.
SQL Expression Fields are added to the SELECT clause of the SQL query sent to the database.
11. Using a single Command object as the data source can be faster, provided the SQL query you write returns only the desired data set.
12. Perform grouping on the server.
This is only relevant if you need to return only the summary to your report, not the details. It will be faster because less data is returned to the report.
Regards
Girish Bhosale -
Huge number of Managed Properties
I've recently started at a new company and inherited the existing SharePoint farm. I've been looking at search, as crawling content seems quite slow. One thing I have noticed is that there is a huge number of managed properties (>5,000); there are pages and pages like the one below.
There are only ~1,800 crawled properties, so I'm not really sure why there are so many managed properties.
I have noticed that the SharePoint and Office categories have the 'Automatically generate a new managed property' option enabled. The farm uses a number of 3rd-party add-ons, and at this point I'm not sure whether they are responsible, or whether they require the automatically-generated properties option.
Just wondering if anyone had seen this or may have an idea?
Cheers

Hey Scott, thanks for replying.
The managed property mapping looks normal: each is mapped to a single crawled property. There are, however, >4,000 managed properties mapped to a single crawled property, which is weird.
I don't know where that crawled property is coming from, though. This is 2010, so I can't use the SiteCollection property of the Get-SPEnterpriseSearchMetadataCrawledProperty command to filter, and I'm not sure there is another way of figuring that out.
I'll probably end up trying to delete all of these mapped properties, or just create a new Search Service Application and start from scratch. -
Huge number of garbage collected objects
We're running a system here with the Java heap set to 256 MB and have noticed that now and then garbage collection takes a horribly long time to complete (on the order of minutes, rather than fractions of a minute!). Something like 3 million objects are being freed when the server is heavily loaded.
Has anyone else experienced this behaviour? Has anyone tested WebLogic with JProfiler/OptimizeIt and found any troublesome spots where many objects are created? One potential culprit is servlet logging: since the timestamp is a formatted date, my guess is that a new Date object is created for every log line, which is expensive and hence might cause many more objects that need to be garbage collected. Can any WebLogic engineers confirm/deny this?

Use vmstat to determine if you're swapping; sar would work too. Swapping is dictated by the OS, but an inordinate amount of swapping activity just means you get to tune the hardware along with the application.
Jason
Original Message: On 2/21/00, 12:45:26 PM, "Hani Suleiman" <[email protected]> wrote regarding Re: Huge number of garbage collected objects:
Here are the results from running top on that machine:
Memory: 512M real, 14M free, 553M swap in use, 2908M swap free
PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
3035 root 50 59 0 504M 334M sleep 308:42 5.13% java
How do I make sure I'm not swapping? I thought that kind of thing was dictated by the OS...
Rob Woollen <[email protected]> wrote in message
news:[email protected]..
If GC takes on the order of minutes to run, then I suspect that you are paging. How much physical memory do you have on the machine? Make sure that you are not swapping.
-- Rob
Hani Suleiman wrote:
We're running a system here with the Java heap set to 256 MB and have noticed that now and then garbage collection takes a horribly long time to complete (on the order of minutes, rather than fractions of a minute!). Something like 3 million objects are being freed when the server is heavily loaded. Has anyone else experienced this behaviour? Has anyone tested WebLogic with JProfiler/OptimizeIt and found any troublesome spots where many objects are created? One potential place where this can be happening is in the servlet logging. Since there is a timestamp that is a formatted date, my guess is that a new Date object is being created, which is very expensive and hence might cause many more objects that need to be garbage collected. Can any WebLogic engineers confirm/deny this?
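The timestamp guess is easy to mitigate in any language: cache the formatted timestamp and re-format only when the clock second changes, so heavy logging stops allocating a fresh date/format object per line. A sketch of the pattern (shown in Python purely for illustration; the thread itself is about Java):

```python
import time

class CachedClock:
    """Format the wall-clock time at most once per second."""
    def __init__(self):
        self._last_sec = None
        self._cached = ""

    def now(self):
        sec = int(time.time())
        if sec != self._last_sec:          # re-format only on a new second
            self._last_sec = sec
            self._cached = time.strftime("%Y-%m-%d %H:%M:%S",
                                         time.localtime(sec))
        return self._cached

clock = CachedClock()
print(len(clock.now()))  # 19, i.e. "YYYY-MM-DD HH:MM:SS"
```

With log resolution of one second, thousands of log lines per second then share one formatted string instead of creating thousands of short-lived objects for the collector.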
Huge number of unprocessed logging table records found
Hello Experts,
I am facing an issue where a huge number of unprocessed logging-table records was found in the SLT system for one table. I have checked all settings and error logs but found no evidence of what is causing the unprocessed records. In the HANA system the table also shows as replicated. Could you please suggest something other than replicating the same table again, as that option is not possible at the moment?

Hi Nilesh,
What are the performance impacts on the SAP ECC system when multiple large SAP tables like BSEG are replicated at the same time? Is there a guideline for a specific volume or kind of tables?
There is no explicit guideline, since aspects such as server performance as well as the change rate of the tables are also relevant. As a rule of thumb, one dedicated replication job per large table is recommended.
From the SLT side, go through these guides:
How to enable parallel replication before DMIS 2011 SP6 (do not ignore it for SP06)
How to improve the initial load
Regards,
V Srinivasan -
Improve Indesign performance with a huge number of links?
Hi all,
I am working on a poster infographic with a huge number of links, specifically around 4,500. I want to use InDesign over Illustrator for its object styles (vs. graphic styles in Illustrator) and for the interactive capability.
The issue I am having is InDesign's performance with this many links. My computer is not maxed out on resources when InDesign is going full power, but InDesign is still very slow.
So far, here are the things I have tried:
Setting display performance to Fast
Switching from linked AI files to SVGs
Turning off preflight
Turning off live-draw
Turning off save preview
Please let me know if you have any suggestions for speeding up InDesign! System specs below:
Lenovo w520
8GB DDR3 @1333mhz
nVidia 2000M, 2GB GDDR
Intel Core i7 2760QM @ 2.4GHz
240GB Samsung 840 SSD
Adobe CS6
Windows 8
The only other thing I can think to try is to break up the poster into multiple pages/docs and then combine it later, but this is not ideal. Thank you all for your time.
Cheers,
Dylan Halpern

I am not a systems expert, but I wonder whether hiding the links, and keeping InDesign from accessing them, might help. Truly just guessing.
Package the file so all the graphics are in a single folder. Then set the File Handling Preferences so InDesign doesn't yell at you when it can't find the links.
Then quit InDesign and move the folder to a new place. Then reopen InDesign. The preview of the graphics will suck, but it might help. And one more thing, close the Links panel. -
Co88 - for huge number of production orders -running for hours
Hello All,
I have an issue: in our manufacturing plant a huge number of production orders is created every day, almost 1,200 per day. During month end it takes almost 8 to 10 hours to run settlement, which makes month-end closing very difficult for us. We also attempted parallel processing, but it errors out.
I have heard that the CO88 program looks at the orders one by one, checking their status (e.g., closed), and that this is the reason it is so time-consuming; I am not sure how true that is.
I am sure this is a general issue that people have come across. Can anybody share their experience? How can we overcome this and run settlement in the minimum possible time? Is there a note? Please guide me.
thanks
best regards
gj

Yes, this is a generic issue. Most clients manage it by marking the orders as closed/deleted, so that these orders are not considered for period-end settlement.
Also consider Note 545932, and search the notes for further help.