Sync Services Log file very large
~/Library/Application Support/SyncServices/Local
My Sync Services log file was over 111 GB. It appears to grow in size every day. I have reset Sync Services per Apple: http://support.apple.com/kb/TS1627
I even deleted the contents of the Local folder (not recommended per Apple, but it seems fine), and still my syncservices.log grows and is taking over my hard drive.
I use Entourage, and Sync Services is used to sync iCal/Address Book (and then for my iPhone sync).
Any suggestions to stop this out of control log?!?
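For now, as a stopgap, I truncate the log in place rather than deleting it (a sketch; truncate_log is just a helper name I made up), so whatever process holds the file open keeps a valid handle:

```shell
# truncate_log: empty a runaway log file without removing it.
# Deleting an open log can leave the writing process holding the old
# inode, so the disk space is never reclaimed; truncating avoids that.
truncate_log() {
  : > "$1"   # the ':' no-op with output redirection empties the file
}
```

Usage would be something like: truncate_log ~/Library/Application\ Support/SyncServices/Local/syncservices.log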
Hi,
Increasing the redo log file size may reduce the number of log switches, since a larger file takes longer to fill, but your system's redo generation will still be the same. Reduce the frequency of commits.
Use the following notes to further narrow down the possible root cause.
WAITEVENT: "log file sync" Reference Note [ID 34592.1]
WAITEVENT: "log file parallel write" Reference Note [ID 34583.1]
Similar Messages
-
Hi All,
Our BPC log files are very large compared to the data files.
We have separate disks for data and log files:
DATA D:\ = 550GB total
LOG E:\ = 278GB total
D:\ICTSI_BPC.mdf = 5.6GB
E:\ICTSI_BPC_log.ldf = 185GB
Is this correct? I have read that the log file should be roughly 25% of the
size of the total amount of data. Example: a 4 GB database should have a
1 GB log file. How can we adjust the size of the SQL log file?
we are using a multiserver set up, SAP BPC 7 SP04 (32bit)
MS SQL 2008 Server (64 bit)
Thanks in advance!
Hi Jeb,
Do you already back up the database from SQL Server? I ask because on my server there was a time when the log file was bigger than my database, and that was because the backup schedule was not working.
So what I do is run the backup manually from SQL Server, and the log file shrinks automatically.
After that, I create the automatic backup schedule again.
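For reference, that manual sequence can be sketched in T-SQL (the database name comes from this thread; the logical log file name and the backup path are my assumptions, so verify them with sp_helpfile first):

```sql
-- Back up the transaction log so SQL Server can mark its inactive
-- portion as reusable, then shrink the physical file (target size in MB).
BACKUP LOG [ICTSI_BPC] TO DISK = N'E:\Backup\ICTSI_BPC_log.trn';
DBCC SHRINKFILE (N'ICTSI_BPC_log', 2048);
```

Once the scheduled log backups are running again, the file should stop growing without bound.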
Hope this information helps.
Suprapto -
Where to find Reporting Services log files in SharePoint 2013 integrated mode
I used to find the Reporting Services log files in the MSRS folder in SharePoint 2010 integrated mode, but after we migrated to SharePoint 2013 I am not able to find the logs. I wanted to check all the executed SQL statements using the logs, as we don't have access to SQL Server Profiler. Does anybody know the location of the logs?
Hi there,
Not sure you'll see the SQL statements; however, you can find information on the ULS logs here:
http://technet.microsoft.com/en-us/library/cc627510(v=sql.105).aspx
http://technet.microsoft.com/en-us/library/ff487871(v=sql.105).aspx
and here:
http://technet.microsoft.com/en-us/library/ms156500(v=sql.105).aspx
You might also want the ULS Viewer below once the ULS log is capturing info.
http://archive.msdn.microsoft.com/ULSViewer
cheers,
Andrew
Andrew Sears, T4G Limited, http://www.performancepointing.com -
What determines the location of the shared services log files
I've just completed an install and configuration of Shared Services 9.2.1. The shared services log files, SharedServices_Security.log, SharedServices_Admin.log, etc., are being written to the C:\WINDOWS\system32\null\logs\SharedServices9 path. These files should be written to the Tomcat application folders in the Shared Services home, CSS_HOME, folder. e.g. d:\hyperion\sharedservices\9.2\appserver\installedapps\tomcat\5.0.28. This is according to the Shared Services installation documentation.
Is there any way to get these log files written to the d: drive? The following references sharedservices.logdir from the hsslogger.properties file but I don't see where a value is set that I can change.
log4j.appender.FILE.File=${sharedservices.logdir}${file.separator}SharedServices_Security.log
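From what I understand of log4j, ${sharedservices.logdir} is resolved from a Java system property, so one thing I may try (an assumption on my part, not something from the Hyperion docs; the path below is just an example) is setting it explicitly in the Tomcat JVM options for Shared Services:

```
# Hypothetical addition to the Shared Services Tomcat JVM startup options:
-Dsharedservices.logdir=d:\hyperion\sharedservices\9.2\logs
```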
Thanks,
Tom
Hi there!
Are you looking for the AOM log file?
By default it's located at:
SIEBEL_ROOT\ENTERPRISE\SIEBEL_SERVER\log
If you can't find the AOM log files here, try checking the "Log directory" server parameter value.
Best regards,
João Paulo -
Log file sync vs log file parallel write probably not bug 2669566
This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
Version : 9.2.0.8
Platform : Solaris
Application : Oracle Apps
The number of commits per second ranges between 10 and 30.
When querying statspack performance data the calculated average wait time on the event ‘log file sync’ is on average 10 times the wait time for the ‘log file parallel write’ event.
Below just 2 samples where the ratio is even about 20.
"snap_time" " log file parallel write avg" "log file sync avg" "ratio
11/05/2008 10:38:26 8,142 156,343 19.20
11/05/2008 10:08:23 8,434 201,915 23.94
So the wait time for a ‘log file sync’ is 10 times the wait time for a ‘log file parallel write’.
First I thought that I was hitting bug 2669566.
But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
And I think that it proves that I am NOT hitting this bug.
Below is a sample of the output for the log writer.
-- End of snap 3
HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
When adding up the DELTA/SEC values (which are in microseconds) for the wait events, the total always comes to roughly a million microseconds.
In the example above, 781036 + 210432 = 991468 microseconds.
This is the case for all the snaps taken by snapper.
So I think that the wait time for the ‘log file parallel write time’ must be more or less correct.
So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
Any clues?
Yes, that is true!
But that is the way I calculate the average wait time = total wait time / total waits
So the average wait time for the event 'log file sync' per wait should be near the wait time for the 'log file parallel write' event.
I use the query below:
select snap_id
, snap_time
, event
, time_waited_micro
, (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
, total_waits
, (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
, trunc(((time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24))/((total_waits - p_total_waits)/((snap_time - p_snap_time) * 24))) average
from (
select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
lag(sn.snap_time) over (partition by se.event order by sn.snap_time) p_snap_time,
lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
row_number() over (partition by event order by sn.snap_id) r
from perfstat.stats$system_event se, perfstat.stats$snapshot sn
where se.SNAP_ID = sn.SNAP_ID
and se.EVENT = 'log file sync'
order by snap_id, event
)
where time_waited_micro - p_time_waited_micro > 0
order by snap_id desc; -
MMP services log file rotation
Hi,
Sun Java(tm) System Messaging Server 6.3-5.02 (built Oct 12 2007; 32bit)
libimta.so 6.3-5.02 (built 17:15:31, Oct 12 2007; 32bit)
SunOS mta01 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-V240
I'm trying to find out what setting controls MMP services log file rotation and how to change it. Presently, it appears rotation takes place daily and files are kept forever.
ImapProxy_<date>.log
AServices_<date>.log
I would prefer to only keep these files for a week or so.
Should I configure:
local.schedule.prune_mmp = "45 23 * * * /usr/bin/find /var/opt/SUNWmsgsr/log -name ImapProxy\* -atime +3 -exec rm {} \; "
local.schedule.prune_mmp.enable = 1
Thanks.
d-v-k wrote:
I'm trying to find out what setting controls MMP services log file rotation and how to change it. Presently, it appears rotation takes place daily and files are kept forever.
There is no inbuilt log rotation mechanism for the MMP logs.
ImapProxy_<date>.log
AServices_<date>.log
I would prefer to only keep these files for a week or so.
Should I configure:
local.schedule.prune_mmp = "45 23 * * * /usr/bin/find /var/opt/SUNWmsgsr/log -name ImapProxy\* -atime +3 -exec rm {} \; "
local.schedule.prune_mmp.enable = 1
This is definitely one way to prune the log files. I would use the following find string instead:
{code}
/usr/bin/find /var/opt/SUNWmsgsr/log/ -name 'ImapProxy_*.log' -mtime +6 -exec rm {} \;
{code}
You would need to write/enable a similar rule to prune the AServices_*.log files as well.
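Wrapping both patterns into one function might look like this (a sketch; prune_mmp_logs is just a name I made up, and the directory defaults to the stock Messaging Server log location):

```shell
# prune_mmp_logs: delete MMP proxy logs older than a week in the given
# directory (defaults to the standard Messaging Server log directory).
prune_mmp_logs() {
  logdir=${1:-/var/opt/SUNWmsgsr/log}
  for pattern in 'ImapProxy_*.log' 'AServices_*.log'; do
    find "$logdir" -name "$pattern" -type f -mtime +6 -exec rm {} \;
  done
}
```

Scheduled via local.schedule as above, this keeps roughly a week of both log families.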
Regards,
Shane. -
'log file sync' versus 'log file parallel write'
I have been asked to run an artificial test that performs a large number of small insert-only transactions with a high degree (200) of parallelism. The COMMITs were not inside a PL/SQL loop, so a 'log file sync' (LFS) event occurred on each COMMIT. I have measured the average 'log file parallel write' (LFPW) time by running the following PL/SQL queries at the beginning and end of a 10 second period:
SELECT time_waited,
total_waits
INTO wait_start_lgwr,
wait_start_lgwr_c
FROM v$system_event e
WHERE event LIKE 'log%parallel%';
SELECT time_waited,
total_waits
INTO wait_end_lgwr,
wait_end_lgwr_c
FROM v$system_event e
WHERE event LIKE 'log%parallel%';
I took the difference in TIME_WAITED and divided it by the difference in TOTAL_WAITS.
I did the same thing for LFS.
What I expected was that the LFS time would be just over 50% more than the LFPW time: when the thread commits it has to wait for the previous LFPW to complete (on average half way through) and then for its own.
Now I know there is a lot of CPU related stuff that goes on in LGWR but I 'reniced' it to a higher priority and could observe that it was then spending 90% of its time in LFPW, 10% ON CPU and no time idle. Total system CPU time averaged only 25% on this 64 'processor' machine.
What I saw was that the LFS time was substantially more than the LFPW time. For example, on one test LFS was 18.07ms and LFPW was 6.56ms.
When I divided the number of bytes written each time by the average 'commit size' it seems that LGWR is writing out data for only about one third of the average number of transactions in LFS state (rather than the two thirds that I would have expected). When the COMMIT was changed to COMMIT WORK NOWAIT the size of each LFPW increased substantially.
These observations are at odds with my understanding of how LGWR works. My understanding is that when LGWR completes one LFPW it begins a new one with the entire contents of the log buffer at that time.
Can anybody tell me what I am missing?
P.S. Same results in database versions 10.2 Sun M5000 and 11.2 HP G7s. -
LMS 4.0.1 ani log file too large
My LMS 4.0.1 platform has been installed for several days, and I have a very large ani.log file (> 300 MB after 4 days of the daemons running).
In this file, I saw many Ani Discovery errors or warnings like for example:
2011/11/28 11:08:46 Discovery ani WARNING DcrpSMFUDLDDisabledOnPorts: Unable to fetch device details for container(Device,10.11.101.241 hostname: 10.11.101.241)
2011/11/28 11:08:46 Discovery ani WARNING DcrpSMFUDLDDisabledOnPorts: Unable to fetch device details for container(Device,10.74.101.245 hostname: 10.74.101.245)
2011/11/28 11:08:46 Discovery ani ERROR DcrpSMFPortBPDUFilterDisabled: Unable to get span tree device info for the devicecontainer(Device,10.14.101.5 hostname: 10.14.101.5)
2011/11/28 11:08:46 Discovery ani ERROR DcrpSMFPortBPDUFilterDisabled: Unable to get span tree device info for the devicecontainer(Device,10.3.101.12 hostname: 10.3.101.12)
2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFCDPAccessPort: Unable to get CDP information for the devicecontainer(Device,10.1.101.17 hostname: 10.1.101.17)
2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFCDPAccessPort: Unable to get CDP information for the devicecontainer(Device,10.1.101.9 hostname: 10.1.101.9)
2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFSTPBackboneFast: Unable to get span tree device info for the devicecontainer(Device,10.1.101.85 hostname: 10.1.101.85)
2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFSTPBackboneFast: Unable to get span tree device info for the devicecontainer(Device,192.168.12.51 hostname: 192.168.12.51)
2011/11/28 11:25:11 EvalTask-background-41 ani ERROR StpSMFGetStpInstance: unable to get stp device information
These errors are not focused on specific devices (many devices are concerned).
However, all seems to be working fine on the platform (layer2 maps, data collection, inventory, config backup, UT, DFM, ...).
For information: recently I was in contact with TAC because Data Collection was always in a running state.
They provided a new PortDetailsXml.class file to replace the original one.
It has fixed the problem.
I now suspect that the ANI database could be corrupted and needs to be reinitialized.
I would like to be sure of that and, if possible, to avoid this solution.
Thanks for your help.
Hi,
Found some errors and exceptions in the log.
We need to follow the steps below to fix the issue: re-initialize the ANI database.
1. Stop the daemon manager:
Solaris/Linux: /etc/init.d/dmgtd stop
Windows: net stop crmdmgtd
2. Go to /opt/CSCOpx/bin/ and run:
Solaris/Linux: /opt/CSCOpx/bin/perl dbRestoreOrig.pl dsn=ani dmprefix=ANI
Windows: NMSROOT\bin\perl.exe NMSROOT\bin\dbRestoreOrig.pl dsn=ani dmprefix=ANI
3. Start the daemon manager:
Solaris/Linux: /etc/init.d/dmgtd start
Windows: net start crmdmgtd
***IMP*** Re-initializing the ANI database will not lose any of the device history information, because the ANI database does not contain any historical information. As soon as the above steps are complete, you need to run a new Data Collection followed by a User Tracking acquisition, and then check the issue.
Data collection:: Go to Admin > Collection Settings > Data Collection > Data Collection Schedule under "start Data Collection" > For "All Device" >> click "START"
User tracking :: Inventory > User Tracking Settings > Acquisition Actions
hope it will help
Thanks-
Afroz
***Ratings Encourages Contributors **** -
In Shared services, Log Files taking lot of Disk space
Hi Techies,
I have a question. The logs in BI+ on the Shared Services server are taking a lot of disk space, about 12 GB a day.
The following files are taking the most space:
Shared Service-Security-Client log ( 50 MB )
Server-message-usage Service.log ( About 7.5 GB )
Why is this happening? Any suggestions to avoid this?
Thanks in Advance,
Sonu -
[SOLVED]Log files getting LARGE
I ran pacman -Syu for the first time in several months yesterday, and my computer has become almost useless due to the fact that everything.log, kernel.log and messages.log get extremely large (3.8 GB) after a while, causing / to become 100% full.
I've located the following in kernel.log:
Mar 14 15:06:44 elvix attempt to access beyond end of device
Mar 14 15:06:45 elvix attempt to access beyond end of device
Mar 14 15:06:45 elvix sda5: rw=0, want=1812442544, limit=412115382
Mar 14 15:06:45 elvix attempt to access beyond end of device
Mar 14 15:06:45 elvix sda5: rw=0, want=1812442544, limit=412115382
Mar 14 15:06:45 elvix attempt to access beyond end of device
Not sure what it means, but the last two lines are repeated XX times and are the reason why log files grow beyond limits. Anyone got ideas to what can be done to fix this?
Last edited by bistrototal (2008-03-14 16:27:15)
logrotate works really well:
http://www.archlinux.org/packages/14754/
There's quite a few threads about configuration floating around. -
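A size-based rule for the three offenders could look something like this (a sketch for a file under /etc/logrotate.d/ with made-up thresholds; the real fix is still the underlying sda5 errors, this just keeps / from filling up while you investigate):

```
# Rotate the three runaway logs by size instead of waiting for daily cron.
/var/log/everything.log /var/log/kernel.log /var/log/messages.log {
    size 100M
    rotate 2
    compress
    missingok
    notifempty
}
```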
Hi Experts,
I was wondering if there is any way to read/access the log file which is generated while making a web services call. If yes, then how?
Thanks & Regards
Sablok -
SharePoint 2013 - WSS content log file very, very big
Hi there.
Sharepoint 2013 with latest updates.
The WSS content database size is approximately 110 GB, and the log file is 90 GB.
Even doing a FULL backup of the database does not decrease the log file size.
Any hints on how to lower the size of the log file?
With best regards
bostjanc
Hi Bostjan,
I wonder whether the issue can be resolved by shrinking the log file.
Shrinking data files recovers space by moving pages of data from the end of the file to unoccupied space closer to the front of the file. When enough free space is created at the end of the file, data pages at the end of the file can be deallocated and returned to the file system.
Here is the reference for Shrink a File in SQL 2012 for your convenience:
https://technet.microsoft.com/en-us/library/ms190757(v=sql.110).aspx
Regards,
Rebecca Tu
TechNet Community Support
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
[email protected] -
System.log files are large, and cause a kernel panic when deleting
Hello everyone,
I am trying to solve a dilemma. I discovered that over 60 GB of my 120 GB SSD drive are taken up by system.log files. All three are around 20 GB. I left my laptop running, so none of them is the current one. They are system.log.0, system.log.3 and system.log.4. I have tried several ways to delete them:
Using Disksweeper
At a terminal prompt using sudo rm
At a terminal prompt using sudo su - (to get root access) and then rm -f ....
Rebooting the system in single user mode, mounting the drive and then attempting to delete them
Each time I do, the system reboots with a kernel panic. I have not looked at the panic log yet, but can post that if you need to.
I did look at the end of the current log file to discover there were some issues with programs, and I have fixed it so that these size log files "shouldn't" be generated anymore.
Looking for advice as to how to delete these files! This is driving me nuts. It shouldn't be this hard.
Thanks for any help or insight you can give.
OK, I actually was able to solve this myself. My problem was that the kernel panic was caused by the journaling. So here is what I had to do:
I temporarily disabled journaling, see here: http://forums.macrumors.com/showthread.php?t=373067
Then I was able to delete the files by starting a terminal session and using sudo rm to delete the files.
Re-enabled journaling (I don't know if it is needed, but it was on before).
Hope this can help someone else in the future. -
Need help with sync problems, log file below, thanks
Win XP
Palm Desktop latest ver DL two days ago and installed
Treo 755P
HotSync operation started for TerryCano on 01/06/09 10:33:20
Quick Install - Sync configured to Do Nothing.
OK Memos
Calendar synchronization failed
Contacts synchronization failed
Tasks synchronization failed
Protocol Error: Handheld file could not be opened. (4004)
-- Backing up Messages Database to C:\Program Files\Palm\TerryC\Backup\Messages_Database.PDB
-- Backing up psysLaunchDB to C:\Program Files\Palm\TerryC\Backup\psysLaunchDB.PDB
-- Backing up PmTraceDatabase to C:\Program Files\Palm\TerryC\Backup\PmTraceDatabase.PDB
-- Backing up Saved Preferences to C:\Program Files\Palm\TerryC\Backup\Saved_Preferences.PRC
OK Backup
HotSync operation completed on 01/06/09 10:33:25
HotSync operation started for TerryCano on 01/06/09 09:51:11
OK Quick Install
OK Memos
Calendar synchronization failed
Contacts synchronization failed
Tasks synchronization failed
Protocol Error: Handheld file could not be opened. (4004)
-- Backing up AddressingLibRecent-HsCh to C:\Program Files\Palm\TerryC\Backup\AddressingLibRecent-HsCh.PDB
-- Backing up AddressingLibRecent-HsCi to C:\Program Files\Palm\TerryC\Backup\AddressingLibRecent-HsCi.PDB
-- Backing up Blazer Bookmarks to C:\Program Files\Palm\TerryC\Backup\Blazer_Bookmarks.PDB
-- Backing up EAContentDB to C:\Program Files\Palm\TerryC\Backup\EAContentDB.PDB
-- Backing up CarrierProfiles2 to C:\Program Files\Palm\TerryC\Backup\CarrierProfiles2.PDB
-- Backing up locLDefLocationDB to C:\Program Files\Palm\TerryC\Backup\locLDefLocationDB.PDB
-- Backing up ConnectionMgr50DB to C:\Program Files\Palm\TerryC\Backup\ConnectionMgr50DB.PDB
-- Backing up NetworkDB to C:\Program Files\Palm\TerryC\Backup\NetworkDB.PDB
-- Backing up locLCusLocationDB to C:\Program Files\Palm\TerryC\Backup\locLCusLocationDB.PDB
-- Backing up HolUDB to C:\Program Files\Palm\TerryC\Backup\HolUDB.PDB
-- Backing up MyTreoBonusTO-D0600549VZ to C:\Program Files\Palm\TerryC\Backup\MyTreoBonusTO-D0600549VZ.PDB
-- Backing up HSTraceDatabase to C:\Program Files\Palm\TerryC\Backup\HSTraceDatabase.PDB
-- Backing up Messages Database to C:\Program Files\Palm\TerryC\Backup\Messages_Database.PDB
-- Backing up NetworkProfiles2 to C:\Program Files\Palm\TerryC\Backup\NetworkProfiles2.PDB
-- Backing up PocketTunesPL-TNpt Def PL to C:\Program Files\Palm\TerryC\Backup\PocketTunesPL-TNpt_Def_PL.PDB
-- Backing up SMS QuickText to C:\Program Files\Palm\TerryC\Backup\SMS_QuickText.PDB
-- Backing up SndFile Ring Tones to C:\Program Files\Palm\TerryC\Backup\SndFile_Ring_Tones.PDB
-- Backing up MSLSA Data to C:\Program Files\Palm\TerryC\Backup\MSLSA_Data.PDB
-- Backing up Addr Category Ringtones to C:\Program Files\Palm\TerryC\Backup\Addr_Category_Ringtones.PDB
-- Backing up Blazer URL Autofill to C:\Program Files\Palm\TerryC\Backup\Blazer_URL_Autofill.PDB
-- Backing up Blazer Field Autofill to C:\Program Files\Palm\TerryC\Backup\Blazer_Field_Autofill.PDB
-- Backing up GoogleMaps to C:\Program Files\Palm\TerryC\Backup\GoogleMaps.PRC
-- Backing up Holidates3 to C:\Program Files\Palm\TerryC\Backup\Holidates3.PRC
-- Backing up ClientObj to C:\Program Files\Palm\TerryC\Backup\ClientObj.PRC
-- Backing up CheckSyncClient to C:\Program Files\Palm\TerryC\Backup\CheckSyncClient.PRC
-- Backing up AddIt to C:\Program Files\Palm\TerryC\Backup\AddIt.PRC
-- Backing up BFVaultDatabase to C:\Program Files\Palm\TerryC\Backup\BFVaultDatabase.PDB
-- Backing up PhoneCallDB to C:\Program Files\Palm\TerryC\Backup\PhoneCallDB.PDB
-- Backing up Cookie Data to C:\Program Files\Palm\TerryC\Backup\Cookie_Data.PDB
-- Backing up AddressCitiesDB to C:\Program Files\Palm\TerryC\Backup\AddressCitiesDB.PDB
-- Backing up AddressCompaniesDB to C:\Program Files\Palm\TerryC\Backup\AddressCompaniesDB.PDB
-- Backing up AddressCountriesDB to C:\Program Files\Palm\TerryC\Backup\AddressCountriesDB.PDB
-- Backing up BtExgLibDB to C:\Program Files\Palm\TerryC\Backup\BtExgLibDB.PDB
-- Backing up PhoneFavorites2DB to C:\Program Files\Palm\TerryC\Backup\PhoneFavorites2DB.PDB
-- Backing up Blazer Find Autofill to C:\Program Files\Palm\TerryC\Backup\Blazer_Find_Autofill.PDB
-- Backing up CalendarLocationsDB-PDat to C:\Program Files\Palm\TerryC\Backup\CalendarLocationsDB-PDat.PDB
-- Backing up SLEventNotifications to C:\Program Files\Palm\TerryC\Backup\SLEventNotifications.PRC
-- Backing up psysLaunchDB to C:\Program Files\Palm\TerryC\Backup\psysLaunchDB.PDB
-- Backing up Graffiti ShortCuts to C:\Program Files\Palm\TerryC\Backup\Graffiti_ShortCuts.PRC
-- Backing up PmTraceDatabase to C:\Program Files\Palm\TerryC\Backup\PmTraceDatabase.PDB
-- Backing up MIDI Ring Tones to C:\Program Files\Palm\TerryC\Backup\MIDI_Ring_Tones.PDB
-- Backing up System MIDI Sounds to C:\Program Files\Palm\TerryC\Backup\System_MIDI_Sounds.PDB
-- Backing up Saved Preferences to C:\Program Files\Palm\TerryC\Backup\Saved_Preferences.PRC
-- Backing up AddressStatesDB to C:\Program Files\Palm\TerryC\Backup\AddressStatesDB.PDB
-- Backing up AddressTitlesDB to C:\Program Files\Palm\TerryC\Backup\AddressTitlesDB.PDB
-- Backing up Queries to C:\Program Files\Palm\TerryC\Backup\Queries.PDB
-- Backing up CSRecordOutDatebookDB to C:\Program Files\Palm\TerryC\Backup\CSRecordOutDatebookDB.PDB
-- Backing up CSRecordOutEAContentDB to C:\Program Files\Palm\TerryC\Backup\CSRecordOutEAContentDB.PDB
-- Backing up VpadDB to C:\Program Files\Palm\TerryC\Backup\VpadDB.PDB
-- Backing up CalculusDB to C:\Program Files\Palm\TerryC\Backup\CalculusDB.PDB
OK Backup
HotSync operation completed on 01/06/09 09:51:32
Post relates to: Treo 755p (Verizon)
Using the Forum's Search function (I used simply "protocol error 4004") will yield many hits, including this recent thread. Hope that fixes it.
Post relates to: None
smkranz
I am a volunteer, and not an HP employee.
Palm OS ∙ webOS ∙ Android -
Change size hyperion shared services log file
Hello Experts,
I'd like to change (decrease) the size of the sharedservices_security.log.
There are always five files: sharedservices_security.log plus sharedservices_security.log.2 through sharedservices_security.log.5 (each file is 51,201 KB).
How can I change the sharedservices_security.log size?
We have EPM 11.1.1.3 (Essbase / HFM / Planning) on WINServer2003.
Thanks a lot
Regards
kfor
Have a look at <HYPERION_HOME>\deployments\<app_server>\SharedServices9\config\HSSLogger.properties
Look at the MaxFileSize parameter for each log.
Restart web app if you make any changes
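For example, the entries typically follow the log4j RollingFileAppender convention (a sketch; the appender name and values here are examples, so check the actual file for the appender names used in your install):

```
# Hypothetical excerpt from HSSLogger.properties: cap the security log
# at 10 MB per file and keep two rollovers instead of five.
log4j.appender.FILE.MaxFileSize=10240KB
log4j.appender.FILE.MaxBackupIndex=2
```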
Cheers
John
http://john-goodwin.blogspot.com/