MoPad Log In Question
Hello guys! First of all, thanks for the help. I find MoPad very useful for many purposes. I read about it, created a team site on https://etherpad.mozilla.org/, and confirmed it, but I can't access the site I created: when I use the chosen URL it redirects me to http://etherpad.org/. So, is MoPad not working properly, or did I misunderstand the concept? I want a team site so I can gather with other people and edit texts.
All the best, and again, thanks in advance.
This is the Firefox support forum. If you need help regarding Etherpad or another Mozilla service, you can catch us on IRC:
* irc://irc.mozilla.org/it
Use Mibbit if you don't have an IRC client. Hope someone can assist you. Good luck!
Similar Messages
-
iTunes/app log-in question. I changed my email address and password for iTunes. On my iPhone 5, when I try to update my apps, it asks me to log in and lists my old email address. How do I update my phone or get it to recognize the new email address? I tried syncing my phone; no change.
On my phone, I went to iTunes, logged out of my old email address, and logged in with my new email address, and it says "we need to verify a few things first". It takes me to a blank "account settings" screen, and that's it. -
Materialized view log update question
Hi, I am running into a question regarding mviews; I'm not sure if it should behave this way or if I'm not using it right.
I have two base tables, sales and customers (derived from the nested materialized view example in the Oracle docs):
CREATE TABLE sales (
cust_ID VARCHAR2(32 BYTE) NOT NULL,
amount_sold NUMBER,
TEMP VARCHAR2(100 BYTE),
CONSTRAINT sales_pk PRIMARY KEY (cust_id)
);
CREATE TABLE customers (
cust_ID VARCHAR2(32 BYTE) NOT NULL,
CUST_LAST_NAME VARCHAR2(100 BYTE),
TEMP VARCHAR2(100 BYTE),
CONSTRAINT cust_pk PRIMARY KEY (cust_id)
);
CREATE MATERIALIZED VIEW LOG ON sales
WITH ROWID (cust_id, amount_sold);
CREATE MATERIALIZED VIEW LOG ON customers
WITH ROWID (cust_id, cust_last_name);
Then I create a fast refresh materialized view based on them:
CREATE MATERIALIZED VIEW join_sales_cust
REFRESH FAST ON DEMAND AS
SELECT c.cust_id, c.cust_last_name, s.amount_sold, s.rowid srid, c.rowid crid
FROM sales s, customers c
WHERE s.cust_id = c.cust_id;
Since this materialized view only involves cust_id and amount_sold from sales, and cust_id and cust_last_name from customers, I do not want a materialized view log entry to be triggered when the TEMP column value gets updated. So the following update shouldn't write to the mlog:
update sales set TEMP='TEMP2' where cust_id=1;
but this update should:
update sales set amount_sold=3 where cust_id=1;
What I am seeing is that any update on the base table triggers mlog entries, regardless of whether the column is involved in the materialized view or not.
Can someone please confirm if this is the correct behavior and whether there is a way to accomplish what I wanted to do?
Thank you!
Edited by: user3933488 on Jan 8, 2010 12:53 PM
You created the materialized view logs with some columns, which is not necessary when creating a join MV on top of them. You can happily skip those columns in your MV log definition. And then it becomes clear that those columns are not involved in the decision of whether an MV log needs to be updated: everything that happens to the base table gets recorded. The WITH ROWID clause, INCLUDING NEW VALUES, and the column list only specify WHAT should be recorded when something changes.
Regards,
Rob. -
Wonder if anyone can help me with a question?
I am new to Data Guard and only recently set up my first implementation of a primary and standby Oracle 11g database.
It's all set up correctly, i.e. no gaps in the sequences showing, no errors in the alert logs, and I have successfully tested a switchover and switch back.
I wanted to re-test that the archive logs were going across to the standby database OK; unfortunately, I performed an ALTER SYSTEM SWITCH LOGFILE on the standby database instead of the primary.
No errors are reported anywhere, and there are no archive log sequence gaps or errors in the alert logs, but I am wondering if this will cause a problem the next time I have to fail over to the standby database?
Apologies for my lack of knowledge; I am new to Data Guard, have only been a DBA for a couple of years, and have not had time to read the 500-page Data Guard book yet.
Thanks in Advance
First you have to know what happens when a log switch occurs, either manually or internally.
All data and changes are first in the online redo log files; once a log switch occurs, either automatically or forced, the information in the online redo log files is dumped to the archives.
Now tell me: where is the online redo? There is no concept of online redo data on a standby; in the case of real-time apply you will have only standby redo log files, and you cannot even switch standby redo log files.
So this command won't work on a standby; it is applicable only to online redo log files. Online redo exists and is active only on the primary.
So there is nothing to worry about. Just make sure the environment is in sync prior to performing a switchover.
Why are all your questions unanswered? Close them and keep the forum clean. -
hi masters,
this seems to be very basic, but I would like to know the internals of this process.
We all know that LGWR writes redo entries to the online redo log files on disk. On commit, an SCN is generated and tagged to the transaction, and LGWR writes this to the online redo log files.
But my question is: how do these redo entries get into the redo log buffer? All required data is fetched into the buffer cache by the server process; it is modified there and committed. DBWR writes this to the datafiles, but at what point, and which process, writes this committed transaction (the redo entry, I think) into the log buffer?
Does LGWR do this? What happens internally, exactly?
If you can please shed some light on the internals, I will be thankful.
thanks and regards
VD
Hi Vikrant,
> DBWR writes this to datafiles, but at what point, which process writes this committed transaction (i think redo entry) into log buffer cache?
Remember that before DBWR flushes the dirty blocks to the datafiles, the server process makes sure that LGWR has finished writing the redo log buffer to the online redo log files. Per the Oracle architecture, being able to recover data to the point in time of a crash is important, and this is achieved by the online redo log files.
As for how the data gets into the redo log buffer, Aman has stated the clear steps.
- Pavan Kumar N -
Hi,
So after JDK 1.6.0_34 (as far as I know), Java introduced GC log rotation JVM args (see below):
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M
Prior to these JVM arg additions, on any Java application start your GC log would get wiped; in my case, a Tomcat server running Java code.
So now I would assume that when these JVM args are present, the GC log continues to rotate and does not get wiped on application startup. However, the above behavior still persists even after JDK 1.6.0_34 with the new JVM args present.
I should mention that while the Java application is running, the GC log DOES rotate (i.e., gc_log.txt.0 becomes gc_log.txt.1), but when the application is restarted the LAST GC log is erased completely.
This behavior isn't really "log rotation/backup"; I've had to introduce my own shell/bat scripts to rotate the latest GC logs manually, outside of these JVM args. I'm not sure if I'm misunderstanding how to use the new JVM args, or if Oracle doesn't consider this a bug and this is normal behavior.
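A pre-start rotation wrapper of the sort described here might look like the sketch below. The gc_log.txt.* naming follows this post; the function name, the demo directory, and the commented launcher line are assumptions for illustration, not anything the JVM provides.

```shell
#!/bin/sh
# Park any existing GC logs before launching the JVM, since the JVM
# truncates its GC log file on startup even with -XX:+UseGCLogFileRotation.

rotate_gc_logs() {
  dir=$1
  stamp=$(date +%Y%m%d%H%M%S)
  for f in "$dir"/gc_log.txt.*; do
    [ -e "$f" ] || continue        # glob did not match: nothing to rotate
    mv "$f" "$f.$stamp.bak"        # move the old log out of the JVM's way
  done
}

# Demo against a throwaway directory:
demo_dir=$(mktemp -d)
touch "$demo_dir/gc_log.txt.0" "$demo_dir/gc_log.txt.1"
rotate_gc_logs "$demo_dir"
ls "$demo_dir"
# A real launcher would then start the JVM, e.g.:
#   exec java -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 \
#        -XX:GCLogFileSize=1M -Xloggc:gc_log.txt -jar app.jar
```

The point is that the rename happens before the JVM opens its log file, so the startup truncation only ever hits a fresh file.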
I did file a bug on this, 9005051. However, there is no guarantee your bug report ever makes it to the public bug database, and Oracle doesn't provide reasoning back to the submitter when a bug is rejected or not looked at, so I don't really know what happened with my bug request.
If anyone has any insight into this, I would really appreciate it.
Thank you!
> Or better yet, where can I find someone to actually answer my question?
Me! Me! I know the answer to that question...
Oracle is more than happy for you to pay them for a support/service contract, part of which is that you can ask them questions and they will answer them. Real Oracle engineers, and perhaps the very people who created the code in the first place. Or at least the ones in charge of it now.
> yet there is no guarantee my bug is even a bug or will even get posted to the bug database
Yes that is true.
But I suspect, strongly, that if you have a big enough contract, every bug you post to the database and tell Oracle about will appear promptly and be addressed promptly.
And if you have a smaller (but not really small) Oracle contract I am rather certain that you can obtain a custom fix.
Alternatively you can download the source, figure out the problem, and fix it yourself. Then apply it to your running services. -
IDSM-2 ip-log default question?
Ok so, I have recently enabled log-pair-packets on signatures without changing the ip-log defaults, which, per all the documentation and posts in this forum, default to 0 packets, 30 minutes, 0 bytes. But when I look at iplog-status at the CLI, I see that all IP logging on these event-triggered logs is:
any length bytes/packets
30 seconds (not minutes!) exactly
My question is: is the documentation wrong? Also, the documentation says that after any one condition is met it will stop logging; but if bytes = 0 and packets = 0, wouldn't that mean it wouldn't log at all? Or does that mean it does not check that parameter?
I can always do a test scenario myself, but I wanted to ask the community first if they have also found that the documentation is wrong in saying 30 minutes when it is really 30 seconds. Thanks in advance!
ray
The "10" in "vlan access-map IDSMAP 10" is not an identifier like a VLAN number; it's a statement ordering number (as in line-numbered programming). That "10" just means it's the first statement. You need to associate IDSMAP with the VLAN:
VLAN 666
name IDSMAP
Then put that VLAN number into your vlan-list:
vlan filter IDSMAP vlan-list 666
vlan internal allocation policy ascending
vlan access-log ratelimit 2000
I've had problems when limiting the "capture allowed-vlan" to just your interesting VLANs, but if you're seeing some traffic from each VLAN, this doesn't seem to be a likely fix for your problem.
intrusion-detection module 1 data-port 1 capture allowed-vlan 1-4094
The other thing to test is to make your standard CAPTUREALL ACL and extended ACL:
ip access-list standard CAPTUREALL
permit ip any any
Once you get this working, then you can worry about sending duplicate packets to your IDSM for inter-VRF traffic. The IDSM will ignore them, but if you're running a lot of traffic, it may create more load on the sensor than necessary. -
Hi All,
I am running EBS R12 (12.1.1) with an 11.1.0.7 database on OUL5 x64.
I have 2 questions and would like your help.
When applying a patch I got the error below, and when using adadmin to regenerate the form I still got the same error:
The following Oracle Forms objects did not generate successfully:
gl forms/ZHS GLXJEENT.fmx
Where is the location of the log file, so I can see more detail about the error?
I looked in $APPL_TOP/admin/TEST/log and $APPL_TOP/admin/TEST/out,
but they did not have much detail about the error.
I found this note from Hussein:
Please make sure you have no invalid objects in the database before trying to generate the form again via adadmin or manually.
If the form fails to compile, please check the error log file for details -- How to Generate Form, Library and Menu for Oracle Applications [ID 130686.1]
to regenerate the form executable from the command line:
frmcmp_batch.sh module=/TEST/testappl/au/12.0/forms/US/ARXTWMAI.fmb
userid=APPS/APPS output_file=/TEST/testappl/ar/12.0/forms/US/ARXTWMAI.fmx
module_type=form compile_all=special
This form is for US; if I want to generate the form for ZHS, should I just change the location to point to ZHS, like this?
frmcmp_batch.sh module=/TEST/testappl/au/12.0/forms/ZHS/ARXTWMAI.fmb
userid=APPS/APPS output_file=/TEST/testappl/ar/12.0/forms/ZHS/ARXTWMAI.fmx
module_type=form compile_all=special
Thanks for your help.
Regards
> The following Oracle Forms objects did not generate successfully: gl forms/ZHS GLXJEENT.fmx
The adpatch log will tell you which worker was processing this fmb. Mark that worker number, then in the same directory find adwork<worker_num>.log and look at that log file.
You can find the adworker and adpatch logs at:
Application Tier -- adpatch log - $APPL_TOP/admin/<SID>/log/
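To narrow down which worker log mentions the failing form, a sketch like this could help. The adwork*.log naming and the log directory come from the answer above; the function name and the demo data are made up for illustration.

```shell
#!/bin/sh
# Print the adworker log file(s) that mention the form that failed
# to generate, e.g. GLXJEENT from the error in this thread.

find_failing_worker() {
  log_dir=$1   # e.g. "$APPL_TOP/admin/TEST/log"
  form=$2      # e.g. GLXJEENT
  grep -l "$form" "$log_dir"/adwork*.log 2>/dev/null
}

# Demo with fake worker logs:
demo=$(mktemp -d)
echo "Compiling GLXJEENT.fmb ... failed to generate" > "$demo/adwork001.log"
echo "Compiling ARXTWMAI.fmb ... done" > "$demo/adwork002.log"
find_failing_worker "$demo" GLXJEENT
```

With a real patch session you would point it at $APPL_TOP/admin/<SID>/log and the form name from the adpatch error.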
Thanks,
JD -
In the past I have ingested my AVCHD .mts files direct from the SD cards via "Log & transfer" with no problems. Unfortunately a colleague has left the country I'm in, with the SD cards and left me with direct copies on a hard disc.
When I try to use Log & Transfer to ingest the same .mts files from a disc, it doesn't recognise the files. Is this normal?
Did they copy the FULL CARD structure? Leave nothing out? Keep it EXACTLY the way it appeared on the card? In other words, made a folder and copied everything on that card into that folder? If so, then in Log and Transfer, just select that folder.
If not... boo on them. Hisssss! Then you will need third-party options like Toast Titanium or Clipwrap2. I recommend Clipwrap; I use it and really like it.
Shane -
I cannot log into my account. I have reset the password to no avail. On the welcome screen, when I try to log in I get "error 400". If I try in the Organizer or Editor, it locks up and then nothing happens; I can't even shut down the program. Anyone know what is going on? I have PSE 10.
You don't need to log in to use your own licensed version of PSE 10. All you need to do is get this on the screen:
From the above picture, you can see that you have two buttons: Organizer and Editor. Click either of them and you are in.
If you are trying to log in to Photoshop.com, then forget it, because it no longer exists.
Hope this helps. -
RMAN archive log deletion question
Hi All,
What's the best way to delete archive logs that have been backed up by RMAN? Is there a preferred way? We are looking at the two scripts below:
run {
allocate channel c1 type disk ;
sql 'alter system archive log current';
crosscheck archivelog all;
backup AS COMPRESSED BACKUPSET incremental level 1 tag LEVEL1_Wed filesperset 1 database include current controlfile *ARCHIVELOG ALL*;
sql 'ALTER DATABASE BACKUP CONTROLFILE TO TRACE';
crosscheck backup of database;
crosscheck backup of controlfile;
delete noprompt force obsolete;
delete noprompt force expired archivelog all;
*delete archivelog until time 'sysdate-3';*
delete noprompt force expired backup of database;
delete noprompt force expired backup of controlfile;
release channel c1;
}
The above script will back up all archive logs and then delete anything that is more than 3 days old. Or would it be better to use the following format:
run {
allocate channel c1 type disk ;
sql 'alter system archive log current';
crosscheck archivelog all;
backup AS COMPRESSED BACKUPSET incremental level 1 tag LEVEL1_Wed filesperset 1 database include current controlfile *ARCHIVELOG until time 'sysdate-3' DELETE INPUT*;
sql 'ALTER DATABASE BACKUP CONTROLFILE TO TRACE';
crosscheck backup of database;
crosscheck backup of controlfile;
delete noprompt force obsolete;
delete noprompt force expired archivelog all;
delete noprompt force expired backup of database;
delete noprompt force expired backup of controlfile;
release channel c1;
}
Can someone please shed some light on this?
P.S. Sorry, I can't get the required text to show in bold to highlight the differences between the 2 scripts. Please look for the text between the **.
Thanks and Regards
Edited by: rsar001 on Nov 25, 2010 7:45 AM
Edited by: rsar001 on Nov 25, 2010 7:46 AM
So the best way would be to set up the archive retention as mentioned above:
>
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
>
Also change the code to the following:
run {
allocate channel c1 type disk ;
sql 'alter system archive log current';
crosscheck archivelog all;
backup AS COMPRESSED BACKUPSET incremental level 1 tag LEVEL1_Wed filesperset 1 database include current controlfile ARCHIVELOG ALL;
sql 'ALTER DATABASE BACKUP CONTROLFILE TO TRACE';
crosscheck backup of database;
crosscheck backup of controlfile;
delete noprompt force obsolete;
delete noprompt force expired archivelog all;
delete noprompt force expired backup of database;
delete noprompt force expired backup of controlfile;
release channel c1;
}
Would the above suffice, or do we still need to make further adjustments?
Thanks in advance for all your help.
Thanks
Edited by: rsar001 on Nov 25, 2010 10:26 AM -
Hi
In JDeveloper, we could write our application logging to logs for debugging and viewable from JDeveloper.
When we deploy to Standalone weblogic server, we cant find our logging.
Of the 3 loggers (Msg Catalog, nonCatalog, Commons log API),
Which logging mechanism is the best practice used logging for development environment ? Does it print to AdminServer.log ? If not, how can we accomplish that printing app log to AdminServer.log ?
Which logging mechanism is the best practice used logging for production environment ? Does it print to AdminServer.log ? If not, how can we accomplish that printing app log to AdminServer.log ?
Thanks -
Hi,
I need to set my Apache logs to combined for AWStats to work. In 10.6 Server Admin there seems to be no place to change the log format. Can anyone tell me how to do this? I have multiple sites hosted, so will I need to do it for each site?
Thanks
Paul
You'll need to change it directly in the config file for each site. They're in /etc/apache2/sites, with names starting with a site ID number and ending in .conf. You need to change the CustomLog directive to something like
CustomLog "/var/apache2/access_log" combined
...and then restart Apache ("sudo apachectl graceful" will do the trick). -
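Since each hosted site has its own .conf, a loop can flip them all in one pass. This is only a sketch: it assumes the log path in each CustomLog directive is quoted, as in the example above, and it edits the files in place (leaving .bak copies), so test it on a copy of the directory first.

```shell
#!/bin/sh
# Switch every per-site CustomLog directive to the "combined" log format.

set_combined() {
  sites_dir=$1   # e.g. /etc/apache2/sites
  for conf in "$sites_dir"/*.conf; do
    [ -e "$conf" ] || continue
    # Keep the CustomLog keyword and quoted path, replace the format name.
    sed -i.bak 's/^\([[:space:]]*CustomLog[[:space:]][[:space:]]*"[^"]*"\).*$/\1 combined/' "$conf"
  done
}

# Demo against a throwaway directory:
demo=$(mktemp -d)
printf 'CustomLog "/var/apache2/access_log" common\n' > "$demo/0001_example.conf"
set_combined "$demo"
cat "$demo/0001_example.conf"
# prints: CustomLog "/var/apache2/access_log" combined
```

After running it against the real directory, a "sudo apachectl graceful" reload picks up the change.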
Exchange 2013 backup and log truncation question
I have a scenario where I have 5 Exchange mailbox servers as members of a DAG. Everything is running fine and log truncation is working, but it seems to be working differently than I expect from previous versions of Exchange. I am used to the logs being truncated immediately after the full backup completes. What is happening is that the server retains a week's worth of logs: everything older than 7 days gets truncated after a full backup completes. Is this expected behavior? I am running Exchange 2013 CU3 on Server 2012; the backup software is HP Data Protector 8 running the latest updates. I am running a VSS backup. The DAG is not a lagged copy of the DBs either.
I will check the HP Data Protector setting, but I do not think that setting is valid or configurable.
Here is the output of various commands for one of my databases, all of which are configured identically; tell me if you see something set incorrectly. Hopefully I have redacted enough internal stuff:
Get-MailboxDatabaseCopyStatus
RunspaceId : 8467f253-8c82-4206-a002-14779e701b0e
Identity : DB1\EX2013-MB1
Id : DB1\EX2013-MB1
Name : DB1\EX2013-MB1
DatabaseName : DB1
Status : Mounted
InstanceStartTime : 12/10/2013 7:02:44 PM
LastStatusTransitionTime :
MailboxServer : EX2013-MB1
ActiveDatabaseCopy : EX2013-MB1
ActiveCopy : True
ActivationPreference : 1
StatusRetrievedTime : 1/7/2014 12:54:19 PM
WorkerProcessId : 10292
ActivationSuspended : False
ActionInitiator : Unknown
ErrorMessage :
ErrorEventId :
ExtendedErrorInfo :
SuspendComment :
RequiredLogsPresent :
SinglePageRestore : 0
ContentIndexState : Healthy
ContentIndexErrorMessage :
ContentIndexVersion : 1
ContentIndexBacklog : 0
ContentIndexRetryQueueSize : 0
ContentIndexMailboxesToCrawl :
ContentIndexSeedingPercent :
ContentIndexSeedingSource :
CopyQueueLength : 0
ReplayQueueLength : 0
ReplaySuspended : False
ResumeBlocked : False
ReseedBlocked : False
MinimumSupportedDatabaseSchemaVersion : 0.121
MaximumSupportedDatabaseSchemaVersion : 0.126
RequestedDatabaseSchemaVersion :
LatestAvailableLogTime :
LastCopyNotificationedLogTime : 1/7/2014 12:38:09 PM
LastCopiedLogTime :
LastInspectedLogTime :
LastReplayedLogTime :
LastLogGenerated : 3510
LastLogCopyNotified : 3510
LastLogCopied : 0
LastLogInspected : 0
LastLogReplayed : 0
LowestLogPresent : 2875
LastLogInfoIsStale : False
LastLogInfoFromCopierTime : 1/7/2014 12:54:19 PM
LastLogInfoFromClusterTime : 1/7/2014 12:54:16 PM
LastLogInfoFromClusterGen : 3510
LogsReplayedSinceInstanceStart : 0
LogsCopiedSinceInstanceStart : 0
LatestFullBackupTime : 12/9/2013 10:31:32 PM
LatestIncrementalBackupTime :
LatestDifferentialBackupTime :
LatestCopyBackupTime :
SnapshotBackup : True
SnapshotLatestFullBackup : True
SnapshotLatestIncrementalBackup :
SnapshotLatestDifferentialBackup :
SnapshotLatestCopyBackup :
LogReplayQueueIncreasing : False
LogCopyQueueIncreasing : False
ReplayLagStatus : Enabled:False; PlayDownReason:None; Percentage:0; Configured:00:00:00; Actual:00:00:00
DatabaseSeedStatus :
OutstandingDumpsterRequests : {}
OutgoingConnections : {}
IncomingLogCopyingNetwork :
SeedingNetwork :
DiskFreeSpacePercent : 99
DiskFreeSpace : 4.365 TB (4,798,945,292,288 bytes)
DiskTotalSpace : 4.366 TB (4,800,702,836,736 bytes)
ExchangeVolumeMountPoint :
DatabaseVolumeMountPoint : C:\Disks\DB1\
DatabaseVolumeName : \\?\Volume{b95c7aa2-2cd9-430e-ab75-ea4b0efba659}\
DatabasePathIsOnMountedFolder : True
LogVolumeMountPoint : C:\Disks\DB1\
LogVolumeName : \\?\Volume{b95c7aa2-2cd9-430e-ab75-ea4b0efba659}\
LogPathIsOnMountedFolder : True
LastDatabaseVolumeName :
LastDatabaseVolumeNameTransitionTime :
VolumeInfoError :
IsValid : True
ObjectState : Unchanged
[PS] C:\Windows\system32>Get-MailboxDatabase -Identity db1 |fl
RunspaceId : 8467f253-8c82-4206-a002-14779e701b0e
JournalRecipient :
MailboxRetention : 30.00:00:00
OfflineAddressBook : \Default Offline Address Book
OriginalDatabase :
PublicFolderDatabase : CMBS1CC\CMBS1PF\CMBS1PF
ProhibitSendReceiveQuota : Unlimited
ProhibitSendQuota : 2 GB (2,147,483,648 bytes)
RecoverableItemsQuota : 30 GB (32,212,254,720 bytes)
RecoverableItemsWarningQuota : 20 GB (21,474,836,480 bytes)
CalendarLoggingQuota : 6 GB (6,442,450,944 bytes)
IndexEnabled : True
IsExcludedFromProvisioning : False
IsExcludedFromInitialProvisioning : False
IsSuspendedFromProvisioning : False
IsExcludedFromProvisioningBySpaceMonitoring : False
DumpsterStatistics :
DumpsterServersNotAvailable :
ReplicationType : Remote
AdminDisplayVersion : Version 15.0 (Build 775.38)
AdministrativeGroup : Exchange Administrative Group (FYDIBOHF23SPDLT)
AllowFileRestore : False
BackgroundDatabaseMaintenance : True
ReplayBackgroundDatabaseMaintenance :
BackgroundDatabaseMaintenanceSerialization :
BackgroundDatabaseMaintenanceDelay :
ReplayBackgroundDatabaseMaintenanceDelay :
MimimumBackgroundDatabaseMaintenanceInterval :
MaximumBackgroundDatabaseMaintenanceInterval :
BackupInProgress :
DatabaseCreated : True
Description :
EdbFilePath : C:\Disks\DB1\DB\DB1.edb
ExchangeLegacyDN : /o=%%%%%/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=EX2013-MB1/cn=Microsoft
Private MDB
DatabaseCopies : {DB1\EX2013-MB1, DB1\EX2013-MB2, DB1\EX2013-MB3, DB1\EX2013-MB1-DR,
DB1\EX2013-MB2-DR}
InvalidDatabaseCopies : {}
AllDatabaseCopies : {DB1\EX2013-MB1, DB1\EX2013-MB2, DB1\EX2013-MB3, DB1\EX2013-MB1-DR, DB1\EX2013-MB2-DR}
Servers : {EX2013-MB1, EX2013-MB2,
EX2013-MB3, EX2013-MB1-DR, EX2013-MB2-DR}
ActivationPreference : {[EX2013-MB1, 1], [EX2013-MB2, 2], [EX2013-MB3, 3], [EX2013-MB1-DR, 4], [EX2013-MB2-DR, 5]}
ReplayLagTimes : {[EX2013-MB1, 00:00:00], [EX2013-MB2, 00:00:00], [EX2013-MB3,
00:00:00], [EX2013-MB1-DR, 00:00:00], [EX2013-MB2-DR, 00:00:00]}
TruncationLagTimes : {[EX2013-MB1, 00:00:00], [EX2013-MB2, 00:00:00], [EX2013-MB3, 00:00:00], [EX2013-MB1-DR,
00:00:00], [EX2013-MB2-DR, 00:00:00]}
RpcClientAccessServer : EX2013-MB1.%%%%%%.local
MountedOnServer :
DeletedItemRetention : 60.00:00:00
SnapshotLastFullBackup :
SnapshotLastIncrementalBackup :
SnapshotLastDifferentialBackup :
SnapshotLastCopyBackup :
LastFullBackup :
LastIncrementalBackup :
LastDifferentialBackup :
LastCopyBackup :
DatabaseSize :
AvailableNewMailboxSpace :
MaintenanceSchedule : {Sun.1:00 AM-Sun.5:00 AM, Mon.1:00 AM-Mon.5:00 AM, Tue.1:00 AM-Tue.5:00 AM, Wed.1:00
AM-Wed.5:00 AM, Thu.1:00 AM-Thu.5:00 AM, Fri.1:00 AM-Fri.5:00 AM, Sat.1:00 AM-Sat.5:00 AM}
MountAtStartup : True
Mounted :
Organization : %%%%%%%
QuotaNotificationSchedule : {Sun.1:00 AM-Sun.1:15 AM, Mon.1:00 AM-Mon.1:15 AM, Tue.1:00 AM-Tue.1:15 AM, Wed.1:00 AM-Wed.1:15 AM, Thu.1:00 AM-Thu.1:15
AM, Fri.1:00 AM-Fri.1:15 AM, Sat.1:00 AM-Sat.1:15 AM}
Recovery : False
RetainDeletedItemsUntilBackup : False
Server : EX2013-MB1
MasterServerOrAvailabilityGroup : DAG1
WorkerProcessId :
CurrentSchemaVersion :
RequestedSchemaVersion :
AutoDagExcludeFromMonitoring : False
AutoDatabaseMountDial : GoodAvailability
DatabaseGroup :
MasterType : DatabaseAvailabilityGroup
ServerName : EX2013-MB1
IssueWarningQuota : 1.9 GB (2,040,110,080 bytes)
EventHistoryRetentionPeriod : 7.00:00:00
Name : DB1
LogFolderPath : C:\Disks\DB1\LOGS
TemporaryDataFolderPath :
CircularLoggingEnabled : False
LogFilePrefix : E01
LogFileSize : 1024
LogBuffers :
MaximumOpenTables :
MaximumTemporaryTables :
MaximumCursors :
MaximumSessions :
MaximumVersionStorePages :
PreferredVersionStorePages :
DatabaseExtensionSize :
LogCheckpointDepth :
ReplayCheckpointDepth :
CachedClosedTables :
CachePriority :
ReplayCachePriority :
MaximumPreReadPages :
MaximumReplayPreReadPages :
DataMoveReplicationConstraint : SecondCopy
IsMailboxDatabase : True
IsPublicFolderDatabase : False
MailboxProvisioningAttributes :
AdminDisplayName : DB1
ExchangeVersion : 0.10 (14.0.100.0)
DistinguishedName : CN=DB1,CN=Databases,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative
Groups,CN=%%%%%,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=%%%%%%,DC=%%%
Identity : DB1
Guid : 765d01d0-4468-43fd-a1e8-ad205b25c8ee
ObjectCategory : %%%%%%.%%%/Configuration/Schema/ms-Exch-Private-MDB
ObjectClass : {top, msExchMDB, msExchPrivateMDB}
WhenChanged : 12/10/2013 7:02:44 PM
WhenCreated : 11/13/2013 11:00:58 AM
WhenChangedUTC : 12/11/2013 12:02:44 AM
WhenCreatedUTC : 11/13/2013 4:00:58 PM
OrganizationId :
OriginatingServer : %%%%%-DC4.%%%%%.%%%
IsValid : True
ObjectState : Changed
[PS] C:\Windows\system32>Get-MailboxDatabase -Status | ft Name,Server,LastF*Backup,LastI*Backup,LastD*Backup -AutoSize
Name Server LastFullBackup LastIncrementalBackup LastDifferentialBackup
MB1-DB0 EX2013-MB1 1/6/2014 11:30:46 PM
DB1 EX2013-MB1 1/6/2014 11:30:46 PM
DB2 EX2013-MB1 1/6/2014 11:30:46 PM
MB3-DB0 EX2013-MB3 1/6/2014 10:45:44 PM
DB5 EX2013-MB3 1/6/2014 10:45:44 PM
DB6 EX2013-MB3 1/6/2014 10:45:44 PM
MB2-DB0 EX2013-MB2 1/6/2014 10:15:39 PM
DB3 EX2013-MB2 1/6/2014 10:15:39 PM
DB4 EX2013-MB2 1/6/2014 10:15:39 PM
MB1-DR-DB0 EX2013-MB1-DR 1/6/2014 8:00:23 PM
MB2-DR-DB0 EX2013-MB2-DR 1/6/2014 8:15:22 PM -
Log & Transfer Question - Import Settings
Is there any way to save my import settings in Log & Transfer (FCP7)? Every clip I import goes back to the default of 4 audio channels, and I don't want all four channels. Aside from manually changing the audio setting for each clip in the batch, is there a way to save my audio settings for the entire batch of clips?
Thanks.
While you may not be able to save a setting, you can apply a change to all clips in the L&T bin.
In the bin, highlight the clips you want to have the same settings; then, in the viewer window in Log & Transfer, select the Settings tab, make the changes to the audio, and apply to all.
x