Is there a VMware system image file?
I want to study Solaris on my own PC, but I don't want to spend too much time on installation. If anybody has created a Solaris environment based on an Intel chip, please leave a message for me. Let's study together.
There's one downloadable from here:
http://developers.sun.com/solaris/downloads/solaris_apps/index.jsp
It's not particularly up to date, but it should be good enough for most things.
Similar Messages
-
Secondly, I used Migration Assistant to move everything from the old laptop to the new one. When I use Pages on the new machine (OS X 10.8.5), there is no "Save As" to click in the File menu, only "Save". Why?
Hold down the Option key while clicking on the menu bar and "Save As" will appear.
-
I have a problem with a PDF file that does not open in Reader on Windows 8 but opens properly in Adobe Reader. All other PDFs can be opened in Reader, but when I open this PDF (see this link: http://incometaxsoft.com/temp/Form.pdf)
it gives the error "Can't open this file. There's a problem with the file format."
The same file opens properly in Adobe Reader; you can check the PDF file I linked above. The Reader that comes with Windows 8 can open other PDFs on the same PC. What may be causing this error?

This has turned out to be an enormous issue for me, as I sell PDF files as ebooks. I have done a fair amount of investigating this on my system.
My files have to be compatible not just across readers but across operating systems.
To date, I have over 200 PDFs that have functioned flawlessly across Mac, PC (Windows 7 and below), Android, iPhone/iPad, Linux.
I personally test my PDFs using a variety of readers and PDF editors including
PDF XChange (my favorite)
Foxit (runner up for me and I recommend for most people)
Adobe (the bloated monster)
Nitro 9 (great for moving graphical elements around)
ABBYY
And the Nuance PDF Create toolsets
Those are off the top of my head. There are a bunch on Android that I test with too.
I am running the Windows 10 Pro Tech Preview and I have this same problem, so I know it isn't fixed yet in any kind of pre-release way (sigh).
Here is what I've learned for my situation
The PDFs I created using NUANCE'S PDF CREATE PROFESSIONAL VERSION 8
all fail using the built-in Windows 8/10 PDF reader.
When I look at the PDF properties for these Nuance-created files, the underlying engine used to write them is called "ImageToPDF". ABBYY indicates its own engine, as does every other tool I've tried. It is easy to check
what created your PDF by pressing Ctrl+D (look at the document properties). Perhaps there's a common engine causing issues.
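For anyone who wants to check a whole batch of files for the suspect engine without opening each one in a viewer, here is a rough, hedged sketch that scans raw PDF bytes for the /Producer entry that the document properties dialog displays. It assumes the entry is stored as an uncompressed literal string; PDFs that keep their Info dictionary in compressed object streams or hex strings will not match.

```java
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: finds the /Producer string in raw PDF bytes so a
// folder of ebooks can be screened for the "ImageToPDF" engine in bulk.
public class PdfProducerCheck {
    private static final Pattern PRODUCER =
        Pattern.compile("/Producer\\s*\\((.*?)\\)");

    // Returns the producer string, or null if none is found in plain text.
    public static String extractProducer(byte[] pdfBytes) {
        String text = new String(pdfBytes, StandardCharsets.ISO_8859_1);
        Matcher m = PRODUCER.matcher(text);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        byte[] sample = "%PDF-1.5\n1 0 obj\n<< /Producer (ImageToPDF) >>\nendobj"
                .getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(extractProducer(sample)); // prints "ImageToPDF"
    }
}
```

A real screening tool would use a PDF library to read the Info dictionary properly; this is only a quick first pass.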
If I use the exact same source files to create a PDF using any of my other tools I have no issues. I checked the PDF versions made by the tools and they are all set to 1.5.
A customer mentioned being able to convert them into a form that worked just by re-saving them, without having to do any kind of extraction, but I have not been able to duplicate that. Perhaps he did a "print," which seems like it could work.
In summary, the workaround everyone is talking about, using an alternate reader, of course works. But not everyone wants to change.
The culprit I have found is my Nuance PDF Creation tools that are using the ImageToPDF engine.
I hope it gets FIXED as I really don't want to have to regenerate all of my PDF files. -
How can I recover my database after losing the SYSTEM data file?
hi everyone,
how can I recover my database in the following scenario?
1. An offline complete backup was taken 2 days ago; the database was in archivelog mode.
2. Today I lost my SYSTEM data file and also lost all my archived log files.
3. I started up the database, but the following error was generated:
SQL> startup
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: 'D:\ORACLE\ORADATA\ORCL\SYSTEM01.DBF'
4. I copied the SYSTEM data file from the backup and issued the following statement to recover the database:
SQL> recover datafile 1;
ORA-00279: change 2234434 generated at 07/15/2009 10:52:10 needed for thread 1
ORA-00289: suggestion : C:\B\ARC00051.001
ORA-00280: change 2234434 for thread 1 is in sequence #51
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Now I don't have any archive files. Is there any chance to recover the database?
R e g a r d s,
Asif Iqbal
Software Engineer,
Lucky Tex, Karachi,
Pakistan.

> now i don't have any archive file. is there any chance to recover the database?

If no archive log files are available, you can't recover the datafile. You need to have all the archives from the time the offline backup was taken until the SYSTEM datafile was lost.
Anand -
My Time Machine has stopped backing up, saying there are some read-only files.
My Time Machine has stopped backing up and says that there are some read-only files preventing this. What steps do I need to take to fix it?
If you have more than one user account, these instructions must be carried out as an administrator.
Launch the Console application in any of the following ways:
☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left.
Enter the word "Starting" (without the quotes) in the String Matching text field. You should now see log messages with the words "Starting * backup," where * represents any of the words "automatic," "manual," or "standard." Note the timestamp of the last such message. Clear the text field and scroll back in the log to that time. Post the messages timestamped from then until the end of the backup, or the end of the log if that's not clear.
Post the log text, please, not a screenshot. If there are runs of repeated messages, post only one example of each. Don't post many repetitions of the same message.
When posting a log extract, be selective. Don't post more than is requested.
Please do not indiscriminately dump thousands of lines from the log into a message.
Some personal information, such as the names of your files, may be included — edit that out, too, but don’t remove the context. -
Hello,
I would like to ask you about advice.
We have MSSQL 2008 R2, 32-bit. Memory is 4 GB, split into 2 GB for Windows and 2 GB for applications. The database uses the simple recovery model because we replicate its data to 2 other servers. Currently we work with 2 servers. Max memory for MSSQL is 2048 MB.
We set the backup as follows:
USE MSDB
GO
DECLARE @JMENO_ZALOHY VARCHAR(120)
SELECT @JMENO_ZALOHY = 'E:\backup\BackupSQL\1 Pondeli\DAVOSAM_'+ convert( varchar(2), datepart( hh, getdate() ) ) + '00_DEN_DIFF.bak'
SELECT @JMENO_ZALOHY
BACKUP DATABASE [DAVOSAM]
TO DISK = @JMENO_ZALOHY
WITH INIT, DIFFERENTIAL, CHECKSUM, COMPRESSION
GO
Every second or third day there is an error message in the log, 'There is insufficient system memory in resource pool 'internal' to run this query', exactly at the time of the backup. The error keeps repeating, mostly during working hours.
Today I found out that the problem is probably the compression of the backup, because if I remove the word COMPRESSION, the backup runs normally without error.
Question: Is my hypothesis correct that the problem is the backup with compression?
Thank you, David

Hello, this evening I ran the backup command below. All is OK; probably MSSQL has cleaned its memory. I will make the next attempt at peak time next week.
Since I removed the word COMPRESSION, there has not been any error in the error log.
I have checked memory: as soon as memory reaches its top, about 1.707 GB, MSSQL writes these messages into the log:
2014-03-14 15:00:04.63 spid89 Memory constraints resulted reduced backup/restore buffer sizes. Proceding with 7 buffers of size 64KB.
2014-03-14 15:00:08.74 Backup Database differential changes were backed up. Database: DAVOSAM, creation date(time): 2014/01/12(22:03:10), pages dumped: 16142, first LSN: 1894063:1673:284,
last LSN: 1894063:1792:1, full backup LSN: 1894053:15340:145, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'E:\backup\BackupSQL\5 Patek\DAVOSAM_1500_DEN_DIFF.bak'}). This is an informational message. No user action is required.
2014-03-14 15:00:12.79 spid72 Memory constraints resulted reduced backup/restore buffer sizes. Proceding with 7 buffers of size 64KB.
2014-03-14 15:00:12.88 Backup Database differential changes were backed up. Database: WEBFORM, creation date(time): 2014/02/01(05:22:47), pages dumped: 209, first LSN: 125436:653:48, last
LSN: 125436:674:1, full backup LSN: 125435:689:36, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'E:\backup\BackupSQL\5 Patek\WEBFORM_1500_DEN_DIFF.bak'}). This is an informational message. No user action is required.
After that, MSSQL reduced its memory to 1.692 GB.
USE MSDB
GO
DECLARE @JMENO_ZALOHY VARCHAR(120)
SELECT @JMENO_ZALOHY = 'E:\backup\BackupSQL\6 Sobota\DAVOSAM_'+ convert( varchar(2), datepart( hh, getdate() ) ) + '00_DEN_FULL.bak'
SELECT @JMENO_ZALOHY
BACKUP DATABASE [DAVOSAM]
TO DISK = @JMENO_ZALOHY
WITH INIT, CHECKSUM, COMPRESSION, MAXTRANSFERSIZE=65536
GO
E:\backup\BackupSQL\6 Sobota\DAVOSAM_2100_DEN_FULL.bak
(1 row(s) affected)
Processed 467240 pages for database 'DAVOSAM', file 'DavosAM_Data' on file 1.
Processed 2 pages for database 'DAVOSAM', file 'DavosAM_Log' on file 1.
BACKUP DATABASE successfully processed 467242 pages in 24.596 seconds (148.411 MB/sec).
select * from sys.dm_exec_connections
where net_packet_size > 8192
session_id most_recent_session_id connect_time net_transport protocol_type
protocol_version endpoint_id encrypt_option auth_scheme
node_affinity num_reads num_writes last_read last_write net_packet_size client_net_address
client_tcp_port local_net_address local_tcp_port
connection_id parent_connection_id most_recent_sql_handle
(0 row(s) affected)
SELECT SUM (pages_allocated_count * page_size_in_bytes)/1024 as 'KB Used', mo.type, mc.type
FROM sys.dm_os_memory_objects mo
join sys.dm_os_memory_clerks mc on mo.page_allocator_address=mc.page_allocator_address
GROUP BY mo.type, mc.type
ORDER BY 1 DESC;
KB Used type type
29392 MEMOBJ_SORTTABLE MEMORYCLERK_SQLSTORENG
9392 MEMOBJ_SOSNODE MEMORYCLERK_SOSNODE
8472 MEMOBJ_SQLTRACE MEMORYCLERK_SQLGENERAL
5480 MEMOBJ_SECOLMETACACHE USERSTORE_SCHEMAMGR
5280 MEMOBJ_RESOURCE MEMORYCLERK_SQLGENERAL
5008 MEMOBJ_CACHEOBJPERM USERSTORE_OBJPERM
4320 MEMOBJ_SOSSCHEDULER MEMORYCLERK_SOSNODE
2864 MEMOBJ_PERDATABASE MEMORYCLERK_SQLSTORENG
2328 MEMOBJ_SQLCLR_CLR_EE MEMORYCLERK_SQLCLR
2288 MEMOBJ_SESCHEMAMGR USERSTORE_SCHEMAMGR
2080 MEMOBJ_SOSDEADLOCKMONITORRINGBUFFER MEMORYCLERK_SQLSTORENG
2008 MEMOBJ_LOCKBLOCKS OBJECTSTORE_LOCK_MANAGER
1584 MEMOBJ_CACHESTORETOKENPERM USERSTORE_TOKENPERM
1184 MEMOBJ_LOCKOWNERS OBJECTSTORE_LOCK_MANAGER
840 MEMOBJ_SNIPACKETOBJECTSTORE OBJECTSTORE_SNI_PACKET
760 MEMOBJ_SOSDEADLOCKMONITOR MEMORYCLERK_SQLSTORENG
752 MEMOBJ_SESCHEMAMGR_PARTITIONED USERSTORE_SCHEMAMGR
688 MEMOBJ_RESOURCEXACT MEMORYCLERK_SQLSTORENG
616 MEMOBJ_SOSWORKER MEMORYCLERK_SOSNODE
552 MEMOBJ_METADATADB MEMORYCLERK_SQLGENERAL
480 MEMOBJ_SRVPROC MEMORYCLERK_SQLCONNECTIONPOOL
424 MEMOBJ_SQLMGR CACHESTORE_SQLCP
400 MEMOBJ_SBOBJECTPOOLS OBJECTSTORE_SERVICE_BROKER
384 MEMOBJ_SUPERLATCH_BLOCK MEMORYCLERK_SQLSTORENG
384 MEMOBJ_RESOURCEDATASESSION MEMORYCLERK_SQLGENERAL
352 MEMOBJ_SOSSCHEDULERMEMOBJPROXY MEMORYCLERK_SOSNODE
328 MEMOBJ_SBMESSAGEDISPATCHER MEMORYCLERK_SQLSERVICEBROKER
320 MEMOBJ_METADATADB USERSTORE_DBMETADATA
296 MEMOBJ_INDEXSTATSMGR MEMORYCLERK_SQLOPTIMIZER
264 MEMOBJ_LBSSCACHE OBJECTSTORE_LBSS
224 MEMOBJ_XE_ENGINE MEMORYCLERK_XE
216 MEMOBJ_GLOBALPMO MEMORYCLERK_SQLGENERAL
208 MEMOBJ_PROCESSRPC USERSTORE_SXC
200 MEMOBJ_SYSTASKSESSION MEMORYCLERK_SQLCONNECTIONPOOL
200 MEMOBJ_REPLICATION MEMORYCLERK_SQLGENERAL
192 MEMOBJ_SOSSCHEDULERTASK MEMORYCLERK_SOSNODE
176 MEMOBJ_SQLCLRHOSTING MEMORYCLERK_SQLCLR
168 MEMOBJ_SYSTEMROWSET CACHESTORE_SYSTEMROWSET
128 MEMOBJ_RESOURCESUBPROCESSDESCRIPTOR MEMORYCLERK_SQLGENERAL
128 MEMOBJ_CACHESTORESQLCP CACHESTORE_SQLCP
128 MEMOBJ_RESOURCESEINTERNALTLS MEMORYCLERK_SQLSTORENG
120 MEMOBJ_BLOBHANDLEFACTORYMAIN MEMORYCLERK_BHF
120 MEMOBJ_SNI MEMORYCLERK_SNI
88 MEMOBJ_QUERYNOTIFICATON MEMORYCLERK_SQLOPTIMIZER
72 MEMOBJ_HOST MEMORYCLERK_HOST
72 MEMOBJ_INDEXRECMGR MEMORYCLERK_SQLOPTIMIZER
64 MEMOBJ_RULETABLEGLOBAL MEMORYCLERK_SQLGENERAL
56 MEMOBJ_SERVICEBROKER MEMORYCLERK_SQLSERVICEBROKER
56 MEMOBJ_REMOTESESSIONCACHE MEMORYCLERK_SQLGENERAL
56 MEMOBJ_PARSE CACHESTORE_PHDR
48 MEMOBJ_CACHESTOREBROKERTBLACS CACHESTORE_BROKERTBLACS
48 MEMOBJ_APPENDONLYSTORAGEUNITMGR MEMORYCLERK_SQLSTORENG
40 MEMOBJ_SBASBMANAGER MEMORYCLERK_SQLSERVICEBROKER
32 MEMOBJ_OPTINFOMGR MEMORYCLERK_SQLOPTIMIZER
32 MEMOBJ_SBTRANSPORT MEMORYCLERK_SQLSERVICEBROKERTRANSPORT
32 MEMOBJ_CACHESTOREBROKERREADONLY CACHESTORE_BROKERREADONLY
32 MEMOBJ_DIAGNOSTIC MEMORYCLERK_SQLGENERAL
32 MEMOBJ_UCS MEMORYCLERK_SQLSERVICEBROKER
24 MEMOBJ_STACKSTORE CACHESTORE_STACKFRAMES
24 MEMOBJ_CACHESTORESXC USERSTORE_SXC
24 MEMOBJ_FULLTEXTGLOBAL MEMORYCLERK_FULLTEXT
24 MEMOBJ_APPLOCKLVB OBJECTSTORE_LOCK_MANAGER
24 MEMOBJ_FULLTEXTSTOPLIST CACHESTORE_FULLTEXTSTOPLIST
24 MEMOBJ_CONVPRI CACHESTORE_CONVPRI
16 MEMOBJ_SQLCLR_VMSPY MEMORYCLERK_SQLCLR
16 MEMOBJ_VIEWDEFINITIONS MEMORYCLERK_SQLOPTIMIZER
16 MEMOBJ_SBACTIVATIONMANAGER MEMORYCLERK_SQLSERVICEBROKER
16 MEMOBJ_AUDIT_EVENT_BUFFER OBJECTSTORE_SECAUDIT_EVENT_BUFFER
16 MEMOBJ_HASHGENERAL MEMORYCLERK_SQLQUERYEXEC
16 MEMOBJ_SBTIMEREVENTCACHE MEMORYCLERK_SQLSERVICEBROKER
16 MEMOBJ_ASYNCHSTATS MEMORYCLERK_SQLGENERAL
16 MEMOBJ_BADPAGELIST MEMORYCLERK_SQLUTILITIES
16 MEMOBJ_QSCANSORTNEW MEMORYCLERK_SQLQUERYEXEC
16 MEMOBJ_SCTCLEANUP MEMORYCLERK_SQLGENERAL
16 MEMOBJ_XP MEMORYCLERK_SQLXP
8 MEMOBJ_SECURITY MEMORYCLERK_SQLGENERAL
8 MEMOBJ_CACHESTOREBROKERRSB CACHESTORE_BROKERRSB
8 MEMOBJ_EXCHANGEXID MEMORYCLERK_SQLGENERAL
8 MEMOBJ_CACHESTOREVENT CACHESTORE_EVENTS
8 MEMOBJ_CACHESTOREXPROC CACHESTORE_XPROC
8 MEMOBJ_DBMIRRORING MEMORYCLERK_SQLUTILITIES
8 MEMOBJ_SERVICEBROKERTRANSOBJ CACHESTORE_BROKERTO
8 MEMOBJ_CACHESTOREOBJCP CACHESTORE_OBJCP
8 MEMOBJ_CACHESTOREXMLDBELEMENT CACHESTORE_XMLDBELEMENT
8 MEMOBJ_ENTITYVERSIONINFO MEMORYCLERK_SQLSTORENG
8 MEMOBJ_AUDIT_MGR MEMORYCLERK_SQLGENERAL
8 MEMOBJ_EXCHANGEPORTS MEMORYCLERK_SQLGENERAL
8 MEMOBJ_DEADLOCKXML MEMORYCLERK_SQLSTORENG
8 MEMOBJ_CACHESTORETEMPTABLE CACHESTORE_TEMPTABLES
8 MEMOBJ_HTTPSNICONTROLLER MEMORYCLERK_SQLHTTP
8 MEMOBJ_CACHESTOREVIEWDEFINITIONS CACHESTORE_VIEWDEFINITIONS
8 MEMOBJ_CACHESTOREPHDR CACHESTORE_PHDR
8 MEMOBJ_CACHESTOREXMLDBTYPE CACHESTORE_XMLDBTYPE
8 MEMOBJ_CACHESTORE_BROKERUSERCERTLOOKUP CACHESTORE_BROKERUSERCERTLOOKUP
8 MEMOBJ_EVENTSUBSYSTEM MEMORYCLERK_SQLGENERAL
8 MEMOBJ_CACHESTOREBROKERDSH CACHESTORE_BROKERDSH
8 MEMOBJ_SOSDEADLOCKMONITORXMLREPORT MEMORYCLERK_SQLSTORENG
8 MEMOBJ_CACHESTOREXMLDBATTRIBUTE CACHESTORE_XMLDBATTRIBUTE
8 MEMOBJ_CACHESTOREBROKERKEK CACHESTORE_BROKERKEK
8 MEMOBJ_QPMEMGRANTINFO MEMORYCLERK_SQLQUERYEXEC
8 MEMOBJ_CACHESTOREQNOTIFMGR CACHESTORE_NOTIF
(101 row(s) affected)
David -
Hi,
We seem to get this error through SCOM every couple of weeks. It doesn't correlate with the AV updates, so I'm not sure what's eating up the memory. The server has been patched to the latest rollup and service pack. The mailbox servers
have been provisioned with more than enough memory. Currently they just slow down until the databases activate on another mailbox server.
A significant portion of the database buffer cache has been written out to the system paging file.
Any ideas?

I've seen this with properly sized servers running very little Exchange load. It could be a number of different things. Here are some items to check:
Confirm that the server hardware has the latest BIOS, drivers, firmware, etc
Confirm that the Windows OS is running the recommended hotfixes. Here is an older post that might still apply to you
http://blogs.technet.com/b/dblanch/archive/2012/02/27/a-few-hotfixes-to-consider.aspx
http://support.microsoft.com/kb/2699780/en-us
Setup a perfmon to capture data from the server. Look for disk performance, excessive paging, CPU/Processor spikes, and more. Use the PAL tool to collect and analyze the perf data -
http://pal.codeplex.com/
Include looking for other applications or processes that might be consuming system resources (AV, Backup, security, etc)
Be sure that the disk are properly aligned -
http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
Check that the network is properly configured for Exchange Server. You might be surprised how the network config can cause perf and SCOM alerts.
Make sure that you did not (improperly) statically set msExchESEParamCacheSizeMax and msExchESEParamCacheSizeMin attributes in Active Directory -
http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
Be sure that hyperthreading is NOT enabled -
http://technet.microsoft.com/en-us/library/dd346699(v=exchg.141).aspx#Hyper
Check that there are no hardware issues on the server (RAM, CPU, etc). You might need to run some vendor specific utilities/tools to validate.
Proper paging file configuration should be considered for Exchange servers. You can use the perfmon to see just how much paging is occurring.
These will usually lead you in the right direction. Good Luck! -
This was discussed here, with no resolution
http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
I have the same issue. This is a single-purpose physical mailbox server with 320 users and 72GB of RAM. That should be plenty. I've checked and there are no manual settings for the database cache. There are no other problems with
the server, nothing reported in the logs, except for the aforementioned error (see below).
The server is sluggish. A reboot will clear up the problem temporarily. The only processes using any significant amount of memory are store.exe (using 53 GB), regsvc (using 5 GB), and W3 and Monitoringhost.exe using 1 GB each. Does anyone have
any ideas on this?
Warning ESE Event ID 906.
Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)

Brian,
We had this event log entry as well which SCOM picked up on, and 10 seconds before it the Forefront Protection 2010 for Exchange updated all of its engines.
We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
for the sole purpose of serving as our public folder servers.
So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some, thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the
cache flush to the paging file, we got the following alert:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:14 AM
Event ID: 17012
Task Category: Storage
Level: Error
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
Followed by:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:15 AM
Event ID: 17106
Task Category: Storage
Level: Information
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:13:50 AM
Event ID: 17102
Task Category: Storage
Level: Warning
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action. This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent
actions.
Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
Thanks! -
Deleting the C:\System Recovery Files folder
In 2012 someone already posted this and I have the same problem. Can't anyone solve this? How about someone from HP or Microsoft?
Problem deleting the C:\System Recovery Files folder (01-08-2012 10:37 AM): My computer, an HP Pavilion desktop, model RK575AA-ABA a1740n running Windows Vista Home Premium, was getting more and more sluggish. One morning, when booting, I pressed F11 and clicked on "Backup and Recovery". The process took a while and resulted in a MINWINPC folder on my backup drive containing my C: drive in compressed form. The rest of the process was to take the computer back to its previous capabilities: formatting, installing the factory image of the system, and installing my programs. Once this was completed, I decompressed the MINWINPC folder and it deposited all of the files from my previous C: drive into a "System Recovery Files" folder on my C: drive. From that folder, I copied and pasted the user data.
The C:\System Recovery Files folder is still on my C: drive; it has served its purpose and is not needed any longer. I tried to delete it. I took administrator privileges, right-clicked the folder, and selected the "Delete" command. Surprise! Access was denied and I cancelled the command. I tried a lot of tricks gathered by a search on the Web (taking ownership from the Properties and Security panel, using TakeOwnership and Unlocker) to no avail, though I had very little problem deleting user data and all the files resulting from the installation of applications. Deleting the AppData folder required more patience, but there was no way to delete subfolders of Program Files containing Windows programs (Internet Explorer, Microsoft Security Client, Windows Calendar, ...). An attempt to use Unlocker even resulted in a BSOD, indicating a last-ditch effort by Windows to save its integrity.
What can I do?

Unless this folder is excessive in size, why be worried? If you absolutely have to remove the folder, you could do it outside of Windows, since Windows will not let it be done. My choice is to get a Linux live CD, burn it, and then reboot into Linux from the live CD. It will not alter the Windows install, but it will allow you to remove or write whatever you want on the hard drive. I have done this many times. For a user-friendly distro, look for Linux Mint or Ubuntu. Be cautioned that bad things can happen to any other files if they are accidentally altered or deleted while using this method. The risk, however, is yours, depending on how much this Recovery Files folder annoys you.
-
One JVM with different System.out / System.err files
I have a menu application which allows me to launch different Swing apps which all run inside one single JVM.
All applications have their own properties file which is read and handled at the start of each application.
One property allows me to redirect all kinds of System.err / System.out output to a specific file. This is implemented with the following code:
if (isTRACE_ENABLED()) {
    try {
        setTrace_out_log(new PrintStream(
            new BufferedOutputStream(
                new FileOutputStream(props.getProperty("TRACE_OUTLOG_FILE")), 128), true));
        System.setOut(getTrace_out_log());
        setTrace_err_log(new PrintStream(
            new BufferedOutputStream(
                new FileOutputStream(props.getProperty("TRACE_ERRLOG_FILE")), 128), true));
        System.setErr(getTrace_err_log());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This works fine, but... all System.out and System.err output is redirected to the same file, which is not what I want.
Example:
debug property for menu application = enabled
debug property for app 1 = disabled
debug property for app 2 = enabled
In above case i want to have 4 new files:
- menuapp_out
- menuapp_err
- app2_out
- app2_err
This doesn't work: the files are created, but after starting app2, the print statements for the menu application are also redirected to the app2_xxx files. And when I finish app2, I do not get any print output anymore.
IMHO this is because the JVM has only one System.out and one System.err. Is there some way to solve this?

I understand that I need to use java.util.logging (JUL) or Log4j.
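A minimal java.util.logging sketch of that idea: give each application its own Logger with its own FileHandler instead of swapping the JVM-wide System.out/System.err. The logger names and file names here are illustrative, not taken from the original application:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// One Logger per application, each writing to its own file, so several
// apps in one JVM can log independently.
public class PerAppLogging {
    public static Logger createAppLogger(String appName, String logFile)
            throws IOException {
        Logger logger = Logger.getLogger(appName);
        logger.setUseParentHandlers(false);          // don't also log to console
        FileHandler handler = new FileHandler(logFile, true); // append mode
        handler.setFormatter(new SimpleFormatter()); // plain text, not XML
        logger.addHandler(handler);
        return logger;
    }

    public static void main(String[] args) throws IOException {
        Logger menu = createAppLogger("menuapp", "menuapp.log");
        Logger app2 = createAppLogger("app2", "app2.log");
        menu.info("menu started");    // goes only to menuapp.log
        app2.warning("app2 problem"); // goes only to app2.log
    }
}
```

Because each app writes through its own Logger rather than the shared System.out, starting or stopping one app never redirects another app's output.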
Are there any (free) tools available to read/analyze the logfiles created by the above tools? -
System stats - files and memory
Please accept my apology if this is the wrong forum!
Is there a way - system calls, e.g. - to tell how much memory and how many files a process or thread uses? I know I can use /proc to find info about file handles, but I expect there must be a place where the system keeps this information?
I want to be able to tell from within the program itself - I'm using third-party libraries, and apparently they allocate inordinate amounts of resources without freeing them; and they are not about to admit the fault is in their code, so I have to see which resources are in use before and after calling their code.

> Please accept my apology if this is the wrong forum!

Yeah, this is a Solaris OS question :-)
> Is there a way - system calls, e.g. - to tell how much memory and how many files a process or thread uses? I know I can use /proc to find info about file handles,

Ugh, what's wrong with it?

> but I expect there must be a place where the system keeps this information?

It's kept by the kernel. The only interface through which you can see it is procfs or the proc tools (man -s1 proc), but these tools query /proc anyway. -
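To make the /proc answer in the thread above concrete, here is an illustrative Java sketch of a process inspecting its own resource usage. Note the assumptions: it uses the Linux text layout of /proc (/proc/self/fd, /proc/self/status); on Solaris the same data lives in binary structs under /proc that the proc(1) tools read, so the paths and parsing here would need to change.

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

// Self-inspection via procfs (Linux layout assumed), useful for comparing
// resource usage before and after calling a suspect third-party library.
public class SelfStats {
    // Number of open file descriptors for this process.
    public static int openFdCount() {
        File[] fds = new File("/proc/self/fd").listFiles();
        return fds == null ? -1 : fds.length;
    }

    // Resident set size in kB, parsed from /proc/self/status.
    public static long residentKb() throws Exception {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:")) {
                return Long.parseLong(line.replaceAll("\\D+", ""));
            }
        }
        return -1;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("open fds: " + openFdCount());
        System.out.println("RSS kB: " + residentKb());
    }
}
```

Calling these two methods before and after the library call gives a crude but library-independent picture of leaked descriptors and memory growth.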
System data file is almost full, around 17 GB
Hi,
I have enabled auditing in my database, so because of the growing data in the sys.aud$ table my SYSTEM data file has almost reached 17 GB.
Can I reduce the size of the SYSTEM data file?
What is the solution? I am using 10g Release 2.
Thanks in advance.

RMAN is a physical backup/restore method. It will restore and recreate the datafiles at the same on-disk size as they were at the time of the backup.
Export-DropDatabase-CreateEmptyDatabase-Import is used to "shrink" databases if there is a lot of wasted space.
Normally for non-SYSTEM tablespaces, it is simpler to create a new tablespace and move data into the new tablespace and then drop the older tablespace so as to not have to use Export-Drop-Create-Import.
However, you cannot use this method for SYSTEM, so you'd have to Export-Drop-Create-Import, as the SYSTEM tablespace is your issue.
Make sure that you have practiced this method and verified that there are no issues with database accounts, schemas, privileges, database links, indexes, datatypes, stored procedures, etc. after you import into a test database, before you try this on production.
Hemant K Chitale -
Aperture DELETES System ID file
I recently switched from one MacBook at work to another. The first was giving me trouble and we want to have it looked at. I moved the hard drive from the original to the new machine so that I could keep working. All seemed fine until I ran Aperture for the first time. Aperture asked for my serial number. Since I didn't have it handy, I quit. Aperture deleted the "Aperture System ID" file from the support directory. Now I can't find my printed serial number and Apple's software has removed it from my system. I contacted Apple support and they were NO HELP.
Clearly there could be a better way for Aperture to handle this. How about just refusing to run and NOT deleting the file? I could have eventually moved back to the other MacBook; not anymore. In an effort to prevent copying of their software, Apple has ****** off a paying customer.
If I need to buy something again, it will be Lightroom, not Aperture.

If you're trying to launch on a machine with an Intel processor, you'll need the universal binary version of FCP. FCP 3 is not Universal Binary and will not run on your MacBook Pro.
-Takayasu -
Display system security file fragmentation_percent
Hi - I'm trying to run the MaxL command to display how fragmented my security file is, but in the log I keep getting this error message about 'Syntax error near security'. I am on Essbase 7.0.
OK/INFO - 1051034 - Logging in user planning.
OK/INFO - 1051035 - Last login on Monday, March 01, 2010 11:59:39 AM.
OK/INFO - 1241001 - Logged in to Essbase.
MAXL> display system security file fragmentation_percent;
ERROR - 1242021 - (1) Syntax error near ['security'].
MAXL> alter application 'UAT2NYPF' enable connects;
OK/INFO - 1056013 - Application UAT2NYPF altered.
MAXL> logout;
User planning is logged out
MaxL Shell completed

Is there anything that can be done, in terms of maintenance, for the security file in Essbase 7.0?
Over the weekend our server guys rebooted the Essbase server, and the Essbase service never came up this morning. I launched Essbase from the command prompt and it gave me an error message about a bad security file.
I replaced the security file from a backup, but if these corruptions can be avoided by doing maintenance on the security file, then I am looking for ways to automate that maintenance process on a biweekly basis.
Thanks
Edited by: CLAU on Mar 1, 2010 9:56 AM
Edited by: CLAU on Mar 1, 2010 10:10 AM -
How to clean large system log files?
I believe that OS X saves a lot of system data in log files that become very large.
I would like to clear old history logs.
How may I view and clean system log files?

Thank you, Niel.
I have obtained the list at /private/var/log.
There are a lot of files in there.
Since I am not familiar with the functions of these files, should I be concerned that simply deleting all of the files in /private/var/log might cause problems? Would this action have some unintended consequences?