JDom Recursion Bugging me!!
Developers,
5 dukes for the correct answer. I asked my manager and he couldn't figure this out, so here goes. A simple recursion problem - or so you might think.
JDom only allows a one-level-deep search for a tag name - no problem - so I wrote a recursive method which looks for a tag no matter how many levels deep. Here is the code:
public Element getParentNode(Element element) {
    if (element.getName().equals("content")) {
        System.out.println("RETURNED");
        return parentElement;
    }
    List content = element.getContent();
    Iterator iterator = content.iterator();
    while (iterator.hasNext()) {
        Object o = iterator.next();
        if (o instanceof Element) {
            Element child = (Element) o;
            return (getParentNode(child));
        }
    }
    return null;
}
I watched this in the debugger and it kept hitting the 'return null' and also the 'return parentElement'. The tag is DEFINITELY in the XML file.
I have no idea what to do - I am thinking of ripping out all the JDom references and using Xerces DOM - but this will be a pain.
What have I done wrong?
Thanks.
Instead of iterating over the nodes, select the node with an XPath expression.
import java.io.File;
import org.jdom.input.SAXBuilder;
import org.jdom.xpath.*;

File xmlDocument;  // the XML file to parse
SAXBuilder saxBuilder = new SAXBuilder("org.apache.xerces.parsers.SAXParser");
org.jdom.Document jdomDocument = saxBuilder.build(xmlDocument);
org.jdom.Element contentNode = (org.jdom.Element) XPath.selectSingleNode(jdomDocument, "//content");
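If you'd rather keep the recursive approach: the bug in the posted method is that it unconditionally returns after recursing into the first child, so sibling subtrees are never searched. A correct depth-first search returns early only on a hit. Here is a self-contained sketch of that pattern using the JDK's built-in org.w3c.dom API rather than JDOM (class and method names here are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class FindTag {
    // Depth-first search: return the first descendant element with the
    // given tag name, or null if there is none. Only return early when
    // a match is found; on a miss, fall through to the next sibling.
    static Element find(Node node, String name) {
        if (node.getNodeType() == Node.ELEMENT_NODE
                && node.getNodeName().equals(name)) {
            return (Element) node;
        }
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Element found = find(children.item(i), name);
            if (found != null) {
                return found;   // hit: propagate it up
            }                   // miss: keep scanning the remaining siblings
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<root><a/><b><content>x</content></b></root>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Element content = find(doc.getDocumentElement(), "content");
        System.out.println(content == null ? "not found" : "found " + content.getTagName());
    }
}
```

The same shape carries over to JDOM: walk element.getContent(), recurse into each child Element, and return only when the recursive call actually found something.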
Similar Messages
-
Eager Fetching recursivity bug
I have two classes A and B.
A has two relations to B.
A.theB1 (A 1-1 to B)
A.theB2 (A 1-n to B)
B has two relations to A.
B.theA1 (B 1-n to A)
B.theA2 (B 1-n to A)
If I configure A.theB1 and A.theB2 with default-fetch-group="true" and then use eager fetching "multiple", everything works fine.
But if I also configure B.theA1 and B.theA2 with default-fetch-group="true" and then use eager fetching "multiple",
I get this Exception:
java.lang.StackOverflowError
at java.util.TreeMap.access$400(TreeMap.java:77)
at java.util.TreeMap$EntryIterator.nextEntry(TreeMap.java:1024)
at java.util.TreeMap$KeyIterator.next(TreeMap.java:1047)
at java.util.TreeMap.buildFromSorted(TreeMap.java:1588)
at java.util.TreeMap.buildFromSorted(TreeMap.java:1576)
at java.util.TreeMap.buildFromSorted(TreeMap.java:1534)
at java.util.TreeMap.addAllForTreeSet(TreeMap.java:1492)
at java.util.TreeSet.addAll(TreeSet.java:247)
at kodo.jdbc.sql.AbstractSelect$JoinSet.<init>(AbstractSelect.java:1021)
at kodo.jdbc.sql.AbstractSelect.and(AbstractSelect.java:719)
at kodo.jdbc.sql.AbstractSelect.getJoins(AbstractSelect.java:541)
at kodo.jdbc.sql.AbstractSelect.select(AbstractSelect.java:229)
at kodo.jdbc.sql.AbstractSelect.select(AbstractSelect.java:221)
at kodo.jdbc.runtime.JDBCStoreManager.selectBaseMappings(JDBCStoreManager.java:874)
at kodo.jdbc.runtime.JDBCStoreManager.selectMappings(JDBCStoreManager.java:835)
at kodo.jdbc.runtime.JDBCStoreManager.access$000(JDBCStoreManager.java:30)
at kodo.jdbc.runtime.JDBCStoreManager$SelectImpl.selectInternal(JDBCStoreManager.java:1071)
at kodo.jdbc.sql.AbstractSelect.select(AbstractSelect.java:259)
at kodo.jdbc.meta.OneToOneFieldMapping.select(OneToOneFieldMapping.java:452)
at kodo.jdbc.runtime.JDBCStoreManager.selectBaseMappings(JDBCStoreManager.java:909)
at kodo.jdbc.runtime.JDBCStoreManager.selectMappings(JDBCStoreManager.java:835)
at kodo.jdbc.runtime.JDBCStoreManager.access$000(JDBCStoreManager.java:30)
at kodo.jdbc.runtime.JDBCStoreManager$SelectImpl.selectInternal(JDBCStoreManager.java:1071)
at kodo.jdbc.sql.AbstractSelect.select(AbstractSelect.java:259)
at kodo.jdbc.meta.OneToOneFieldMapping.select(OneToOneFieldMapping.java:452)
at kodo.jdbc.runtime.JDBCStoreManager.selectBaseMappings(JDBCStoreManager.java:909)
at kodo.jdbc.runtime.JDBCStoreManager.selectMappings(JDBCStoreManager.java:835)
at kodo.jdbc.runtime.JDBCStoreManager.access$000(JDBCStoreManager.java:30)
at kodo.jdbc.runtime.JDBCStoreManager$SelectImpl.selectInternal(JDBCStoreManager.java:1071)
at kodo.jdbc.sql.AbstractSelect.select(AbstractSelect.java:259)
at kodo.jdbc.meta.ManyToManyFieldMapping.load(ManyToManyFieldMapping.java:350)
at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:427)
at kodo.runtime.DelegatingStoreManager.load(DelegatingStoreManager.java:141)
at kodo.datacache.DataCacheStoreManager.load(DataCacheStoreManager.java:386)
at kodo.runtime.DelegatingStoreManager.load(DelegatingStoreManager.java:141)
at kodo.runtime.ROPStoreManager.load(ROPStoreManager.java:82)
at kodo.runtime.StateManagerImpl.loadFields(StateManagerImpl.java:2274)
at kodo.runtime.StateManagerImpl.load(StateManagerImpl.java:139)
at kodo.runtime.PersistenceManagerImpl.load(PersistenceManagerImpl.java:2408)
at kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:1244)
at kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:1175)
at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:677)
at kodo.jdbc.meta.OneToOneFieldMapping.load(OneToOneFieldMapping.java:481)
at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:731)
at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:720)
at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:720)
at kodo.jdbc.runtime.JDBCStoreManager.initialize(JDBCStoreManager.java:338)
at kodo.runtime.DelegatingStoreManager.initialize(DelegatingStoreManager.java:134)
at kodo.datacache.DataCacheStoreManager.initialize(DataCacheStoreManager.java:340)
at kodo.runtime.DelegatingStoreManager.initialize(DelegatingStoreManager.java:134)
at kodo.runtime.ROPStoreManager.initialize(ROPStoreManager.java:57)
at kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:1240)
at kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:1175)
at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:677)
at kodo.jdbc.meta.OneToOneFieldMapping.load(OneToOneFieldMapping.java:481)
at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:731)
at kodo.jdbc.runtime.JDBCStoreManager.initialize(JDBCStoreManager.java:338)
at kodo.runtime.DelegatingStoreManager.initialize(DelegatingStoreManager.java:134)
at kodo.datacache.DataCacheStoreManager.initialize(DataCacheStoreManager.java:340)
at kodo.runtime.DelegatingStoreManager.initialize(DelegatingStoreManager.java:134)
at kodo.runtime.ROPStoreManager.initialize(ROPStoreManager.java:57)
I have tested every combination, and what I see is that the exception occurs
because the eager-fetching process works recursively.
I remember the same problem with TopLink batch reading.
I don't know if this could help, but I see two solutions:
1) an explicit option to work recursively or not;
2) a limit on the recursion depth.
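For what it's worth, suggestion (2) can be sketched generically in plain Java (illustrative only, not the Kodo API): thread a depth limit and a visited set through the recursion so a cyclic A <-> B graph terminates instead of overflowing the stack.

```java
import java.util.*;

public class GraphFetch {
    static class Entity {
        final String name;
        final List<Entity> relations = new ArrayList<>();
        Entity(String name) { this.name = name; }
    }

    // Recursive "eager fetch" sketch: stop at a depth limit and skip
    // objects already seen, so mutually referencing entities terminate.
    static void fetch(Entity e, int depth, int maxDepth,
                      Set<Entity> seen, List<String> loaded) {
        if (depth > maxDepth || !seen.add(e)) {
            return;                        // depth exhausted or cycle detected
        }
        loaded.add(e.name);
        for (Entity rel : e.relations) {
            fetch(rel, depth + 1, maxDepth, seen, loaded);
        }
    }

    public static void main(String[] args) {
        Entity a = new Entity("A");
        Entity b = new Entity("B");
        a.relations.add(b);
        b.relations.add(a);                // cycle: A <-> B
        List<String> loaded = new ArrayList<>();
        fetch(a, 0, 10, new HashSet<>(), loaded);
        System.out.println(loaded);        // terminates instead of overflowing
    }
}
```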
This is the configuration file:
javax.jdo.PersistenceManagerFactoryClass:
kodo.jdbc.runtime.JDBCPersistenceManagerFactory
javax.jdo.option.ConnectionDriverName: com.jnetdirect.jsql.JSQLDriver
javax.jdo.option.ConnectionUserName: sa
javax.jdo.option.ConnectionPassword:
javax.jdo.option.ConnectionURL:
jdbc:JSQLConnect://agarcia/database=kodo3/SelectMethod=cursor
javax.jdo.option.Optimistic: true
javax.jdo.option.RetainValues: true
javax.jdo.option.NontransactionalRead: false
kodo.ConnectionFactoryProperties: MaxCachedStatements=0
kodo.DataCacheClass: kodo.datacache.CacheImpl
kodo.DataCacheProperties: CacheSize=1000000
kodo.PersistenceManagerClass: kodo.runtime.PersistenceManagerImpl
kodo.QueryCacheProperties: CacheSize=10000
kodo.RemoteCommitProviderClass: kodo.event.SingleJVMRemoteCommitProvider
kodo.jdbc.DBDictionaryClass=kodo.jdbc.sql.SQLServerDictionary
kodo.jdbc.TransactionIsolation=
javax.jdo.option.Multithreaded: true
## Transaction ###
kodo.TransactionMode: managed
kodo.ManagedRuntimeClass: kodo.ee.JNDIManagedRuntime
kodo.ManagedRuntimeProperties:
TransactionManagerName=java:/TransactionManager
#kodo.jdbc.MappingFactoryClass: kodo.jdbc.meta.MetaDataMappingFactory
kodo.jdbc.DBDictionaryProperties: BatchLimit=0 JoinSyntax=sql92
kodo.EagerFetchMode: multiple
kodo.FetchBatchSize: 10000
kodo.FetchThreshold: -1
Oh boy. Oops.
Thanks for the report.
-Patrick
On Fri, 01 Aug 2003 16:57:52 +0000, oscar wrote:
> ...
--
Patrick Linskey
SolarMetric Inc. -
Running out of connections...
Hello,
I think I'm having the same problem.
Did you solve it? How?
Thanks a lot.
Franco
Francesco Costa
BEA Systems Italia
[email protected]
Phone +39 02 3327 3000
Mobile + 39 335 1246931
Fax +39 02 3327 3001
"Gerhard Henning" <[email protected]> wrote in message
news:[email protected]...
>
Hi,
we seem to have a similar problem. Can you please tell me
which service pack solves the problem?
We are using SP8, but the problem seems to still be there!
Thanks
Gerhard
Joseph Weinstein <[email protected]> wrote:
Gene Chuang wrote:
Hi Joe,
Good news; bug ISOLATED and FIXED! First off, I was able to determine this bug only manifests if we are running clustered apps. Under a single app server, this bug does not appear, and under a clustered app this bug always appears.
Furthermore, I apologize if I may have led you on a wild goose chase with all the subsequent patches you sent me. As it turns out, when I received each patch I handed it off to our tester, who applied the patch and probably only bounced a node in our server cluster before testing, hence getting negative results. Today I applied your latest patch, clean-bounced all the nodes, and the bug disappeared! Thanks Joe!
Now the 2 questions of the day:
1) Is your latest patch production worthy? Can I apply it on our live site?
Yes.
2) If not, when's the next service pack coming out with this fix?
It will be in future service packs.
Joe
Gene
"Joseph Weinstein" <[email protected]> wrote in message
news:[email protected]...
>>>>
>>>>
Gene Chuang wrote:
Hi Joe,
We tried it and found the same exception raised, except this time it took a lot longer to bug out; the server hung for almost 2 minutes before dumping the following. However, I was just informed by another developer that we may have a recursive bug on our side, and that we are doing a non-terminating db search. Perhaps this is causing this out-of-resource allocation bug, i.e. our bug exposed your bug! I'm sure once we fix our recursion bug we won't see this again, but maybe something should also be done on your side to handle this more gracefully...
I'll let you know what happens once we fix our bug; thanks for the help!
DO let me know. In fact, please use the attached jar (in place of all the others :-) because the line ResourceAllocator.java:561 is not executable code. There may be a JVM problem... I'm fairly sure there's no way our code can cause an NPE in this area...
Joe
javax.ejb.FinderException: Couldn't build CallableStatement from JDBCCommandImpl:
java.sql.SQLException: Exception raised by connection pool: java.lang.NullPointerException
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:561)
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:555)
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:502)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:172)
at weblogic.jdbc.common.internal.ConnectionPool.reserveWaitSecs(ConnectionPool.java:145)
at weblogic.jdbcbase.jts.Connection.openConnectionIfNecessary(Connection.java:587)
at weblogic.jdbcbase.jts.Connection.prepareCall(Connection.java:159)
at weblogic.jdbc20.rmi.internal.ConnectionImpl.prepareCall(ConnectionImpl.java:89)
at weblogic.jdbc20.rmi.SerialConnection.prepareCall(SerialConnection.java:71)
at com.kiko.db.JDBCConnection.getCallableStatement(JDBCConnection.java:158)
>>>>>
Gene Chuang
Join Kiko.com!
"Joseph Weinstein" <[email protected]> wrote in message
news:[email protected]...
>>>>>>
>>>>>>
Gene Chuang wrote:
Hi Joe,
We were finally able to consistently replicate the ResourceAllocator.reserve() stacktrace, although from a slightly different route. I tried your jar, and now have a slightly different exception:
Hi Gene. Thank you for your patience. Here's one last jar file.
Ditch the previous one. I appreciate your energy and willingness
to help. Let me know,
Joe
javax.ejb.EJBException: Failed to select row 0: Exception raised by connection pool: java.lang.NullPointerException
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:561)
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:555)
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:502)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:172)
at weblogic.jdbc.common.internal.ConnectionPool.reserveWaitSecs(ConnectionPool.java:145)
at weblogic.jdbcbase.jts.Connection.openConnectionIfNecessary(Connection.java:587)
at weblogic.jdbcbase.jts.Connection.prepareStatement(Connection.java:135)
at weblogic.jdbc20.rmi.internal.ConnectionImpl.prepareStatement(ConnectionImpl.java:80)
at weblogic.jdbc20.rmi.SerialConnection.prepareStatement(SerialConnection.java:55)
at com.kiko.db.JDBCConnection.getPreparedStatement(JDBCConnection.java:141)
>>>>>>>
Gene Chuang
Join Kiko.com!
"Joseph Weinstein" <[email protected]> wrote in message
news:[email protected]...
Hi Gene.
Would you put the attached jar to the head of your
weblogic.classpath,
and let me know if this fixes the problem? thanks,
Joe
Gene Chuang wrote:
WL 5.1 sp 6, running on Solaris 2.7 over Oracle 8i
Gene Chuang
Join Kiko.com!
"Joseph Weinstein" <[email protected]> wrote in message
news:[email protected]...
Gene Chuang wrote:
Hi Joe,
Now I'm getting this slightly different stack trace... do you think it's the same "out of connection" error, or something else?
It's something else. What version of the product is this?
Joe
Couldn't build PreparedStatement from JDBCCommandImpl:
java.sql.SQLException: Exception raised by connection pool: java.lang.NullPointerException
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:561)
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:555)
at weblogic.common.internal.ResourceAllocator.reserve(ResourceAllocator.java:502)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:172)
at weblogic.jdbc.common.internal.ConnectionPool.reserveWaitSecs(ConnectionPool.java:145)
at weblogic.jdbcbase.jts.Connection.openConnectionIfNecessary(Connection.java:587)
at weblogic.jdbcbase.jts.Connection.prepareStatement(Connection.java:135)
at weblogic.jdbc20.rmi.internal.ConnectionImpl.prepareStatement(ConnectionImpl.java:80)
at weblogic.jdbc20.rmi.SerialConnection.prepareStatement(SerialConnection.java:55)
>>>>>>>>>>>
Gene Chuang
Join Kiko.com!
"Joseph Weinstein" <[email protected]> wrote in message
news:[email protected]...
Hi.
The server won't let an EJB wait indefinitely for a connection to become available in a pool. All the while it waits, it is commandeering one of the fixed number of execute-threads, doing nothing while the thread could possibly run another task which already has its pool connection, and could release it if it ran. The default wait allowed is 5 seconds: the server decides to devote its resources to the work it can do, and not let overflow requests cause an unintentional or malicious denial-of-service attack. The fundamental solution is to supply enough DBMS connections to serve the load you have.
Joe
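Joe's description above (a fixed-size pool, a bounded wait, then failure) can be modeled in a few lines of plain Java using a semaphore. This is illustrative only, not WebLogic code; all names here are made up.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TinyPool {
    private final Semaphore permits;
    private final long waitSecs;

    TinyPool(int maxCapacity, long waitSecs) {
        this.permits = new Semaphore(maxCapacity, true);  // fair queueing
        this.waitSecs = waitSecs;
    }

    // Reserve a connection slot; like the server, give up after the
    // configured wait instead of blocking an execute-thread forever.
    String reserve() throws InterruptedException {
        if (!permits.tryAcquire(waitSecs, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Pool connect failed: None available");
        }
        return "connection";
    }

    void release() {
        permits.release();   // always call this from a finally block
    }
}
```

With maxCapacity 1, a second reserve() before any release() fails after the wait with the same "None available" flavor of error; supplying enough capacity, and releasing promptly in finally blocks, is exactly the fix Joe describes.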
Gene Chuang wrote:
Hi,
Here's our pool init info:
weblogic.jdbc.connectionPool.oraclePool=\
driver=oracle.jdbc.driver.OracleDriver,\
url=jdbc:oracle:oci8:@xxx,\
loginDelaySecs=0,\
initialCapacity=4,\
maxCapacity=10,\
capacityIncrement=2,\
allowShrinking=true,\
shrinkPeriodMins=15,\
refreshMinutes=10,\
testTable=dual,\
props=user=xxx;password=xxxx;server=xxx
I run a db-intensive data migration program on my ejbs (hence JDBC connection requests are made on the server side). After a while, I run out of connections, and the exception pasted below is thrown.
Why is this exception thrown? If the pool is at maxCapacity and all connections are taken, shouldn't weblogic place the connection-requestor on wait until a connection is available? And I'm pretty sure I return the connections, because I wrap connection.close() in a finally block...
JDBCPoolConnection.connect: Problem connecting
Exception:
java.sql.SQLException: Pool connect failed: None available
java.sql.SQLException: Pool connect failed: None available
at weblogic.jdbcbase.pool.Driver.connect(Driver.java:184)
at weblogic.jdbcbase.jts.Driver.connect(Driver.java:233)
at weblogic.jdbc20.common.internal.RmiDataSource.getConnection(RmiDataSource.java:55)
at weblogic.jdbc20.common.internal.RmiDataSource_ServiceStub.getConnection(RmiDataSource_ServiceStub.java:179)
at com.kiko.db.JDBCPoolConnection.connect(JDBCPoolConnection.java:24)
ABMEB.ejbActivate: Getting bean out of pooled state.
Gene Chuang
Join Kiko.com!
--
PS: Folks: BEA WebLogic is in S.F. with both entry and advanced positions for people who want to work with Java and E-Commerce infrastructure products. Send resumes to [email protected]
The Weblogic Application Server from BEA
JavaWorld Editor's Choice Award: Best Web Application Server
Java Developer's Journal Editor's Choice Award: Best Web Application Server
Crossroads A-List Award: Rapid Application Development Tools for Java
Intelligent Enterprise RealWare: Best Application Using a Component Architecture
http://www.bea.com/press/awards_weblogic.html
-
Recursive Sub-query Factoring... bug? or my own personal ignorance?
I was trying to create a list of all dates in the current month using recursive sub-query factoring. (Oracle 11.2)
I know how to do this by other methods; the question is, what's going on with this "WITH" example?
Here's my attempt at generating the list:
with
date_list ( crnt_mnth_date ) as
( select cast ( trunc(sysdate,'MM') as date ) from dual
UNION ALL
select crnt_mnth_date + interval '1' day
from date_list
where to_char(crnt_mnth_date,'MM') = to_char(sysdate,'MM')
)
select * from date_list ;

The recursion should add one day to the previous date and continue while the generated dates are in the same month as the current date (sysdate). All I get is the first day of the month. In an attempt to figure this out, I added the sequence "n". To my surprise, the recursion is subtracting instead of adding the one day each time.
with
date_list ( n, crnt_mnth_date ) as
( select 1, cast ( trunc(sysdate,'MM') as date ) from dual
UNION ALL
select n+1, crnt_mnth_date + (interval '1' day) as crnt_mnth_date
from date_list
where n < 9
)
select * from date_list ;

And... changing the "+" to a "-" still subtracts one day each time -- no change.
Obvious BUG? Or am I just confused about how this works?

Hi,
I've heard that the new recursive WITH clause feature is buggy, especially when DATEs are involved.
I get the same (unexpected) results that you do.
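As a practical workaround, the same month list is trivial to build on the application side; here is a sketch in plain Java with java.time (names are illustrative):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

public class MonthDates {
    // Build the list of all dates in the month containing `any`,
    // mirroring what the recursive WITH clause was meant to produce.
    static List<LocalDate> datesInMonth(LocalDate any) {
        List<LocalDate> dates = new ArrayList<>();
        LocalDate d = any.withDayOfMonth(1);
        while (d.getMonth() == any.getMonth()) {
            dates.add(d);
            d = d.plusDays(1);   // the "+ interval '1' day" step
        }
        return dates;
    }

    public static void main(String[] args) {
        List<LocalDate> may = datesInMonth(LocalDate.of(2001, 5, 15));
        System.out.println(may.size());   // 31
        System.out.println(may.get(0));   // 2001-05-01
    }
}
```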
A very similar query, using only NUMBERs, does work as expected:

with
date_list ( n, crnt_mnth_num ) as
( select 1, 20010501 from dual
UNION ALL
select n+1, crnt_mnth_num + 1 as crnt_mnth_num
from date_list
where n < 9
)
select * from date_list ;

Output:

  N  CRNT_MNTH_NUM
1 20010501
2 20010502
3 20010503
4 20010504
5 20010505
6 20010506
7 20010507
8 20010508
9 20010509 -
Mscorlib recursive resource lookup bug
Hi,
I got crash in my application when restart of the Windows 8.
Please find the assertion failure details below.
Expression: [mscorlib recursive resource lookup bug] Description: Infinite recursion during resource lookup within mscorlib. This may be a bug in mscorlib, or potentially in certain extensibility points
such as assembly resolve events or CultureInfo names. Resource name: Word_At
[Expanded Information] Stack Trace: at System.Environment.ResourceHelper.GetResourceStringCode(Object userDataIn) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode
code, CleanupCode backoutCode, Object userData) at System.Environment.ResourceHelper.GetResourceString(String key, CultureInfo culture) at System.Environment.ResourceHelper.GetResourceString(String key) at
System.Environment.GetResourceStringLocal(String key) at System.Environment.GetResourceFromDefault(String key) at System.Diagnostics.StackTrace.ToString(TraceFormat traceFormat) at System.Diagnostics.StackTrace.ToString()
The Event Viewer shows this error message: .NET Runtime version: 4.0.30319.17929 - Assert Failure. Expression: [mscorlib recursive resource lookup bug] Description: Infinite recursion during resource lookup within mscorlib. This may be a bug in mscorlib, or potentially in certain extensibility points such as assembly resolve events or CultureInfo names. Resource name: Arg_AccessViolationException.
What could be the reason, and is any fix available for this issue in Windows 8?
Thanks.

Thanks Gaurav for the reply.
We verified the code; we are not doing any recursive calls.
This is the piece of code that gives the issue when executed on Windows 8. The issue is seen rarely - we have encountered it twice.
private static bool ReadConfigFile()
{
    bool err = false;
    _logWriterParams = new LogWriterParams(); // default config
    if (File.Exists(_logWriterConfigFileName))
    {
        try
        {
            TextReader reader = new StreamReader(_logWriterConfigFileName);
            XmlSerializer x = new XmlSerializer(typeof(LogWriterParams)); // <<-- the assert trace points here
Assert Failure.
Expression: [mscorlib recursive resource lookup bug]
Description: Infinite recursion during resource lookup within mscorlib. This may be a bug in mscorlib, or potentially in certain extensibility points such as assembly resolve events or CultureInfo
names. Resource name: Word_At
[Expanded Information] Stack Trace: at System.Environment.ResourceHelper.GetResourceStringCode(Object userDataIn) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode
code, CleanupCode backoutCode, Object userData) at System.Environment.ResourceHelper.GetResourceString(String key, CultureInfo culture) at System.Environment.ResourceHelper.GetResourceString(String key) at
System.Environment.GetResourceStringLocal(String key) at System.Environment.GetResourceFromDefault(String key) at System.Diagnostics.StackTrace.ToString(TraceFormat traceFormat) at System.Diagnostics.StackTrace.ToString()
Please excuse the late response.
Thank you,
Jeevan -
Possible bug while implementing recursion
Hey,
Either I am going completely nuts and not seeing the most obvious mistake, or there is really a bug; hope someone can help. I have a hierarchical XML document that I would like to flatten out. I wrote CFML tag-based code and it works perfectly, but when I put the same logic in cfscript it does not work. From what I can tell, when the recursion returns and continues on, the old value in the loop is overwritten (the counter value is not what it was when the state was saved and the recursion took place). I don't know how else to explain this. Here is the code in tags and in cfscript; can you see something I am not?
The function flatternXML works just fine, but flatternXML2 does not.
[code]
<cfset xmlfile = "/xDocs/TAXONOMY_XMLDOC_131.xml" />
<cfset myDoc = xmlParse(xmlfile) />
<cfset theRootElement = myDoc.XmlRoot>
<cfdump var="#theRootElement.XMLChildren[1]#"/>
<cfset st = flatternXML2(theRootElement, "", structNew())/>
<cfdump var="#st#"/>
<cffunction name="flatternXML" access="private" returntype="struct">
<cfargument name="node" type="xml" required="true">
<cfargument name="str" type="string" required="true">
<cfargument name="lineage" type="struct" required="true" >
<cfset t_name="NONE"/>
<cfif structKeyExists(arguments.node.XmlAttributes, "NAME")>
<cfset t_name = arguments.node.XmlAttributes['NAME']/>
</cfif>
<cfif len(arguments.str)>
<cfset arguments.str &= "; " & left(arguments.node.XmlName, 1) & "_" & t_name/>
<cfelse>
<cfset arguments.str = left(arguments.node.XmlName, 1) & "_" & t_name/>
</cfif>
<cfif ArrayLen(arguments.node.XmlChildren) eq 0>
<!---<cfoutput>#arguments.str#</cfoutput></br>--->
<cfset hash_key = hash(arguments.str, "MD5")/>
<cfif not structKeyExists(arguments.lineage, hash_key)>
<cfset structInsert(arguments.lineage, hash_key, arguments.str)/>
<cfelse>
<cfoutput>duplicate lineage #arguments.str#<br/></cfoutput>
</cfif>
<cfreturn arguments.lineage/>
</cfif>
<cfloop from="1" to="#arraylen(arguments.node.XmlChildren)#" index="i">
<cfset arguments.lineage = flatternXML(arguments.node.XmlChildren[i], arguments.str, arguments.lineage)/>
</cfloop>
<cfreturn arguments.lineage/>
</cffunction>
<cffunction name="flatternXML2" access="private" returntype="struct">
<cfargument name="node" type="xml" required="true">
<cfargument name="str" type="string" required="true">
<cfargument name="lineage" type="struct" required="true" >
<cfscript>
//set name and prefix, of the current node
t_name = "NONE";
if (structKeyExists(arguments.node.XmlAttributes, "NAME"))
t_name = arguments.node.XmlAttributes['NAME'];
if (len(arguments.str))
arguments.str &= "; " & left(arguments.node.XmlName, 1) & "_" & t_name;
else
arguments.str = left(arguments.node.XmlName, 1) & "_" & t_name;
//recursion end condition
if (arraylen(arguments.node.XmlChildren) eq 0) {
    writeoutput(arguments.str & "</br>");
    hash_key = hash(arguments.str, "MD5");
    if (not structKeyExists(arguments.lineage, hash_key))
        structInsert(arguments.lineage, hash_key, arguments.str);
    else
        writeoutput("duplicate lineage: " & arguments.str & "<br/>");
    return(arguments.lineage);
}
for (j=1; j lte arraylen(arguments.node.XmlChildren); j=j+1) {
    writeoutput("before " & j & "_" & arraylen(arguments.node.XmlChildren) & "<br/>");
    arguments.lineage = flatternXML2(arguments.node.XmlChildren[j], arguments.str, arguments.lineage);
    writeoutput("after " & j & "_" & arraylen(arguments.node.XmlChildren) & "<br/>");
}
return(arguments.lineage);
</cfscript>
</cffunction>
[/code]

To help you with another pair of eyes, I will just translate the tag version into a script. You can then compare. Here it is:
<cfscript>
var t_name="NONE";
var output="";
var hash_key="";
var updatedStr="";
var updatedLineage=structNew();
var returnStruct=structNew();
updatedStr = arguments.str;
updatedLineage = arguments.lineage;
if (structKeyExists(arguments.node.XmlAttributes, "NAME")) {
    t_name = arguments.node.XmlAttributes['NAME'];
}
if (len(updatedStr)) {
    updatedStr &= "; " & left(arguments.node.XmlName, 1) & "_" & t_name;
}
else {
    updatedStr = left(arguments.node.XmlName, 1) & "_" & t_name;
}
if (ArrayLen(arguments.node.XmlChildren) eq 0) {
    //output=output & updatedStr & "<br/>";
    hash_key = hash(updatedStr, "MD5");
    if (not structKeyExists(updatedLineage, hash_key)) {
        structInsert(updatedLineage, hash_key, updatedStr);
    }
    else {
        //output=output & "duplicate lineage: " & updatedStr & "<br/>";
    }
    //returnStruct.output=output;
    //returnStruct.updatedLineage=updatedLineage;
}
for (i=1; i LTE arraylen(arguments.node.XmlChildren); i=i+1) {
    updatedLineage = flatternXML(arguments.node.XmlChildren[i], updatedStr, updatedLineage);
    //updatedLineage = flatternXML(arguments.node.XmlChildren[i], updatedStr, updatedLineage).updatedLineage;
}
return updatedLineage;
// return returnStruct;
</cfscript>
It is good practice to leave variables in the arguments scope unmodified during the processing of a function. Modifying them increases complexity. This comes into play when you maintain the code later, as we are now doing here. The solution is obvious. Just define a new updatable variable. In this case, updatedStr and updatedLineage.
You will also notice I have commented out all display statements. It is bad practice to make a function do more than one thing at once. The function returns a variable. It should therefore not have the added burden of writing output. If you wish to return output plus some other result(s), then store them in a struct and return that. -
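The scoping point matters most for the loop counter: in CFML, a variable that is not declared with `var` inside a function lives in a single shared scope, so the recursive call overwrites the caller's counter. A minimal sketch of the same failure mode (in Python, using a module-level global to imitate CFML's shared scope; the tree and names are illustrative, not CFML semantics):

```python
# A tiny tree: each node is (name, [children]).
tree = ("root", [("a", [("a1", [])]), ("b", [])])

i = 0  # shared counter, like an un-"var"-ed loop variable in a CFML function

def visit_shared(node, out):
    """Recursion with a shared counter: the nested call clobbers i."""
    global i
    name, children = node
    out.append(name)
    i = 0
    while i < len(children):
        visit_shared(children[i], out)  # this call resets i inside itself
        i += 1
    return out

def visit_local(node, out):
    """Same traversal with a properly local counter (the 'var j' version)."""
    name, children = node
    out.append(name)
    j = 0
    while j < len(children):
        visit_local(children[j], out)
        j += 1
    return out

print(visit_shared(tree, []))  # -> ['root', 'a', 'a1']   ('b' is silently skipped)
print(visit_local(tree, []))   # -> ['root', 'a', 'a1', 'b']
```

The shared-counter version skips the "b" subtree because the recursive call into "a" leaves the counter past the end of root's child list, which is the same symptom described above: the counter "is not what it should be when the state was saved and recursion took place".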
Apparent kompare bug(s)
I apologise in advance for the length of this 1st-Arch-post-in-years. I value context more than brevity, I guess. Hopefully this is the longest post I'll throw at Arch for a long, long time.
I looked around (quite) a bit before filing this post. I've used Arch off and on since 0.4, and know the Arch site, the basics about Arch, and most of the "gotchas" fairly well - but I'm no sysop, either.
I have a project I've been needing and meaning to work on for years (and which continues to compound itself while I don't); a huge set of backups on CD/DVD of all my data (my *life*, really) for well over a decade - letters, graphics, email, the first 40 pages of my book, you name it. Over the years this 'little' library has grown to over 10 GB compressed (30,000 files, *most* of them ARCHIVES). My guess is that 2-4 GB (compressed) of the data is important, some very much so.
My archive collection consists of many recursed archives themselves, and they're in about 8 or 10 different formats, to boot (even *.ice, LMAO). Many are further embedded several levels into the main archives (yes, I know that's a bad idea, but I didn't know that when I created them...). I BADLY need to unpack all this stuff, find and toss the duplicates, and truly organise (and efficiently backup) the remainder. This effort will obviously require a lot of automation (or else my heirs will either do it, or not).
I'd found a couple of useful tools in the Windows world years ago, but I don't buy software and the key piece of my suite was shareware that disabled itself after 30 days - not nearly enough time to plow through this boatload of bits.
A few months back I built a system with an Athlon XP 3500+ on an Asus A8N-E mobo w/512MB and 120 GB and 250 GB HD's. I'm hoping to go the RAID route soon - Linux software RAID, in fact. The 250 GB HD is running off an add-in PCI ATA card; both HD's have a UDMA channel all to themselves. The machine's got 4 extra fans, a nicely oversized 485W P/S, and the on-board sensors consistently show it running cool. I'm quite sure this isn't a H/W problem (I'm having no others; I built it myself...). This machine has been absolutely rock-solid in every other regard. I LOVE the system; now if I can simply use it for this one basically administrative project.
I finally have both the speed and the space required to tackle this now-monster project. When I "came home" to Arch a few months ago, I wanted to try several WMs/DMs; GNOME, KDE, E17, fluxbox and ratpoison (at least) are all loaded on my system, and I've booted most at least once. I'd assumed I'd have a LOT of options as to the tools to use.
I've been man-paging and googling and pacman -Q'ing (followed by more googling) for a couple of weeks through the course of my testing thus far, while trying various wrappers in various WMs. My conclusion: Houston, we have a problem! <S>Maybe</S> Several.
In the past I've been partial to KDE (that's quickly changing), and it's what I started with this time around. As such, kompare seemed the obvious route to go, and had MOST (NOT all) of the functionality I needed (I also very much like it's GUI). Of course, kompare is simply a wrapper for diff (several diff variants, in fact, potentially). It's possible that diff is the problem. Hopefully further testing (and/or comments here) will make that determination more clearly.
Subsequent googling has revealed that the diff version shipped with Arch likely hasn't been completely functional at all levels in almost *5 YEARS* now. Diff's codefile timestamp is Sept. 3, *2002*! Version 2.8.1 (3.4 is current - just like *kompare*'s current version is 3.4). Additionally, directly below diff in that listing is diff3. Diff's codefile size is about 60K; diff3's is about 20K. *However*, they both have the same timestamp AND version # !!!
A *completely cosmetic* bug (http://bugs.kde.org/show_bug.cgi?id=138347) is fairly clearly the same bug as (https://bugs.launchpad.net/ubuntu/+source/kdesdk/+bug/45466 - 3-1/2 *YEARS* earlier!), and seemingly a trivial patch for a competent maintainer to create, at that. I've verified (w/o even trying) that this bug is resident in my system as we speak. Of course it is - the version of diff Arch is currently shipping was compiled almost a YEAR before the FIRST of the 2 bug reports above were filed! My Arch desktop O/S was installed less than a month ago using the 0.7.2 base ISO (kompare's timestamp is Jan. 20, 2007; version 3.4 - and it's a wrapper for a piece of code last compiled in 2002? What, the 2.0 kernel?). I've aggressively maintained my system current since my very recent install. See how well kompare/diff work on YOUR system...
Indeed, from the kompare bug page, it's hard to believe it's EVER been stable - or even used (see http://bugs.kde.org/simple_search.cgi?i … &offset=20 and just keep on clicking "View More Results" - or get yourself an energiser bunny to do it for you, maybe).
I copied the backups from CD's/DVD's to a clean, freshly-showered 64 GB (62.8, actually) ext3 partition (4K blocksize, defaults otherwise, journal included). I set up a top-level directory in this partition with 15 directories beneath it, one for each of my backup discs (whose contents range in size from 50 MB to 4.36 GB). I copied one archive to each directory, then unpacked them one at a time from the command line, removing the archive copy as I went. I ended up with about 15 GB of active data (25%).
I first configured kompare to use diff (w/no C/L options) running under KDE. As kdiff3 is kompare's preference, however, why is kdiff3 not already in the repos?
I'd not used it before, so I wasn't familiar with kompare's UI. I started out working with a few smaller subsets of the data I knew were near-perfect copies of each other and that wouldn't overload the filesystem. It seemed to act strangely; while I expected it to be well-optimised, both HD activity and CPU utilisation seemed extremely low for extended periods (and note my setup doesn't allow for a HDD LED for the HD on the controller card, where ALL the disk activity is going on).
After 3 runs in a row w/o a single errant file being found, I made a 2-byte mod to a small, non-critical text file that was part of the archive I was currently komparing, rebooting to make DAMNED sure cache was cleared, and reran kompare; identical results. The change STILL wasn't detected (the changed file still showed as being an exact copy of the original). The output I was getting was quite weird: usually, the 'kompare' would run at what seemed a rational speed; after a while a response screen would then open, *completely* devoid of data or messages. Immediately after that an odd dialogue box popped up; the title bar said "Error!", but the *message* was, basically, "No Duplicates Encountered." (or some such), or, more commonly, the same one I encountered repeatedly while searching the internet: "Could not parse diff output." (presumably because kompare still wasn't passing the proper parameters to diff). The program crashed about 1/2 the time (actually, give it a big enough block of data, and it'll crash every time). I was stunned 4 days later when ONE run (an exact duplicate of one I'd already attempted several times) resulted in "proper" output - *PERFECT*, with a complete and correct analysis - and completely unreproduceable, even after rebooting. It's ("It's" = proper analysis/output has) happened 3 times since, twice while working with data sets I would term 'trivial'.
So, in 2 weeks I got 3 runs w/o errors. I assure you I was trying at least 2 runs most days; a 1/2-dozen some days. By then, I also knew that kompare under KDE wasn't my only option - and I knew a bit more.
I use the "usual" repositories. When I finally checked diff's compile date, my jaw dropped: Sept. 3, *2002* !!! It's version 2.8.1; the current stable release is 3.4, and 3.5 is currently available for D/L (albeit not released). V. 2.8.1 works with the 2.6 kernel about as well as the original GWBasic compiler ('interpreter') does in WinXP...and may have been this way for a long time.
More research (difficult, given the age of what little kompare documentation is available on the web). The primary maintainer apparently got removed from maintaining the package in 2003 (and, I presume, other packages, as well) for what appeared to be 'apathy' (MY words). He 'forgot' to include an update after saying he would do so (a patch to pass the appropriate parameters to diff, hopefully fixing the "Could not parse diff output." message, and maybe even straighten out my anomolies, as well).
Kompare hasn't been updated since. Diff3 IS currently being maintained. Actually, even though diff3 has apparently replaced diff, *kompare* prefers kdiff3. For that matter, from my reading thus far, diff, diff3, vimdiff and cmp ALL may possibly be useable (the latter would require some scripting). Further, ALL the following may possibly be made able to work as wrappers (in decreasing order of likelihood, IMNSHO): kompare, krusader, meld, and emelFM2.
So, diff is broken, and it doesn't appear that it's going to be fixed...I personally was a bit consternated when I realised just how off-the-rails kompare was. I see a function such as that provided by kompare as both *critical* AND as a 'black box' function - it HAS to work. Just like the ALU MUST compute 2+2 = 4, or the 'computer' becomes a boat anchor...
So, I D/L'ed meld (for GNOME; it runs under KDE but no help is available), then emelFM2 (written for E17, I believe). I've spent the past week trying various combinations of (a) kompare, krusader, meld and emelFM2 as wrappers for diff under (b) KDE, GNOME, and E17. The results have been SO inconsistent it's scary. This includes both giving apparently proper results with a big job on rare occasion (but not consistently reproduceable), and VERY bizarrely, on one occasion, wiping a brand new, 64 *GB* ext3 partition CLEAN. I assure you the remainder of my testing will be on a dedicated partition! This HD is almost brand-new and has given me ZERO other trouble.
Meld was the most consistently performing diff setup I used (on any WM; it worked on them all).
I plan to spend the next 1-2 weeks doing some detailed and structured testing of all possible combos of wrapper/driver/WM, rather than just diff. I also plan to see if it's possible to pass command-line parameters to diff from kompare (if I have the time to spend ANY more w/kompare; kompare clearly isn't what I need - even though it had the . Yes, I *do* know how many combos of 4 wrappers, 4 drivers, and 3 WM's there are (48); my college major was statistics, LOL. And 48 test runs is adequate to test each combo, ONCE...
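As a side note, a 48-combination test matrix like this is easier to generate mechanically than to track by hand; a throwaway sketch (the lists simply restate the combinations named above):

```python
from itertools import product

# The wrapper/driver/WM combinations described above: 4 x 4 x 3 = 48.
wrappers = ["kompare", "krusader", "meld", "emelFM2"]
drivers = ["diff", "diff3", "vimdiff", "cmp"]
wms = ["KDE", "GNOME", "E17"]

combos = list(product(wrappers, drivers, wms))
print(len(combos))  # -> 48
for wrapper, driver, wm in combos[:3]:
    print(wrapper, driver, wm)
```

Feeding each tuple into a small shell runner (and logging pass/fail per combo) would make the "each combo, ONCE" pass reproducible.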
To me, this task is like big sort jobs were on mainframes a (human, not computer!) generation ago: potentially very large, critical tasks, the results of which aren't always easy to manually double-check. Surely many, if not most, users have a manner of doing this. Diff has apparently been broken for maybe 5 years and THAT wouldn't have gone unnoticed - am I one of the VERY few to even try to use kompare? What do the rest of you use?
Note that after locating and reading as much kompare documentation as possible (I read more BUG reports than anything), I made several important discoveries:
(a) kompare isn't really set up for binary files; it's intended for use with text files. You can specify an option to force a text-mode search, but you apparently canNOT do the reverse.
(b) It err'ed out comparing two M$ 'Word' files, stating they "were binary"...
(c) Worst of all, for me: These files have been archived to CD or DVD, usually repeatedly, and then copied back to disk (I've lost 3 good-sized archives this way - which does serve to somewhat justify my apparently obsessive need to backup files). Kompare does do individual file comparisons (ie, if you specify both files on the command line) byte-by-byte (block-by-block, whatever), but if you request that it compare *directories*, filename and size are all it considers (it appears from the sparse documentation I've seen that this is corrected in the *current* version, . The duplicate-file tool suite I used in Windows did an MD5 hash on *every* file, no matter the size; I *knew* when a bit error had creeped in, and usually knew fairly quickly. Not finding out you have a bad backup until it's the only copy of your data you have left is no good!
Anyone have any suggestions on doing this task efficiently and reliably in Arch?
Again, I believe this to be an upstream problem in a way, but my guess is that if we were using the *current* diffutils to match the KDE version (and a maintainer had done his job in a timely fashion) this bug would NOT exist. My advice, at least until Arch can get diff3 (or, preferably, kdiff3) packaged, STAY AWAY FROM kompare !!! At least with anything critical, at least until this issue is better understood.
Kompare (and diffutils, as well, IMO) needs to be removed from the 'public' repo's until it's close to fixed, at least. Just pkgbuild'ing the current diffutils version (or kdiff3!) is likely the easiest way out. Who knows, MAYBE I'll take a stab at that myself. No promises.
For ANYONE still reading, WOW! TIA. Really. Sorry for the tome.
Blue Skies...g
"It's a lot easier to throw grenades than it is to catch them!" -- Tim Russert[intro, backups, bla...]I BADLY need to unpack all this stuff, find and toss the duplicates, and truly organise (and efficiently backup) the remainder.
Ok, that's what it's all about.
I'd found a couple of useful tools in the Windows world years ago, but I don't buy software and the key piece of my suite was shareware that disabled itself after 30 days - not nearly enough time to plow through this boatload of bits.
Total Commander rocks.
[killed half a dozen paragrahps or so] It's possible that diff is the problem.
Congratulations. You managed to post 3K of text but still failed to mention what the problem is.
Subsequent googling has revealed that the diff version shipped with Arch likely hasn't been completely functional at all levels in almost *5 YEARS* now. Diff's codefile timestamp is Sept. 3, *2002*! Version 2.8.1 (3.4 is current - just like *kompare*'s current version is 3.4).
http://www.archlinux.org/packages/search/?q=diffutils
You will notice that package isn't flagged out-of-date. That's because the (still current) upstream version is (tadaa!) 2.8.1. Please tell us where you found that 3.4 version.
Additionally, directly below diff in that listing is diff3. Diff's codefile size is about 60K; diff3's is about 20K. *However*, they both have the same timestamp AND version # !!!
WTF??? What listing? You know that diff and diff3 are different programs but still part of the same upstream sources, don't you?
[non-relevant nonsense...] See how well kompare/diff work on YOUR system...
They work fine.
[kompare bashing]
Take a look at the URL, bugs.kde.org, not bugs.archlinux.org. Got it?
I copied the backups from CD's/DVD's to a clean, freshly-showered 64 GB (62.8, actually) ext3 partition (4K blocksize, defaults otherwise, journal included). I set up a top-level directory in this partition with 15 directories beneath it, one for each of my backup discs (whose contents range in size from 50 MB to 4.36 GB). I copied one archive to each directory, then unpacked them one at a time from the command line, removing the archive copy as I went. I ended up with about 15 GB of active data (25%).
I think I could rest my case already at this point.
I first configured kompare to use diff (w/no C/L options) running under KDE. As kdiff3 is kompare's preference, however, why is kdiff3 not already in the repos?
It is: community/kdiff3 0.9.91-1
(I will skip some paragraphs about the actual kompare usage and come back to that at the very end.)
I use the "usual" repositories. When I finally checked diff's compile date, my jaw dropped: Sept. 3, *2002* !!! It's version 2.8.1; the current stable release is 3.4, and 3.5 is currently available for D/L (albeit not released). V. 2.8.1 works with the 2.6 kernel about as well as the original GWBasic compiler ('interpreter') does in WinXP...and may have been this way for a long time.
Please quit that bullshit and show us version 3.4 of the GNU diffutils.
Kompare hasn't been updated since. Diff3 IS currently being maintained. Actually, even though diff3 has apparently replaced diff, *kompare* prefers kdiff3. For that matter, from my reading thus far, diff, diff3, vimdiff and cmp ALL may possibly be useable (the latter would require some scripting). Further, ALL the following may possibly be made able to work as wrappers (in decreasing order of likelihood, IMNSHO): kompare, krusader, meld, and emelFM2.
Check http://websvn.kde.org/ to follow the development progress of kompare, it's part of kdesdk. diff3 is from diffutils 2.8.1 and I wouldn't call that "maintained", but you already know that. It is obviously not a replacement for diff, did you ever read the manpages? And wow, a KDE program prefers to use another KDE program!
[killed quite a bit]Diff has apparently been broken for maybe 5 years and THAT wouldn't have gone unnoticed - am I one of the VERY few to even try to use kompare? What do the rest of you use?
Again, diff is fine and I'm using kompare (or kdiff3?) almost daily, in krusader.
(a) kompare isn't really set up for binary files; it's intended for use with text files. You can specify an option to force a text-mode search, but you apparently canNOT do the reverse.
(b) It err'ed out comparing two M$ 'Word' files, stating they "were binary"...
Genius. Comparing binary files is useless. If you used MS Office you're hosed, that simple. More useful advice at the end.
Anyone have any suggestions on doing this task efficiently and reliably in Arch?
Yep.
Again, I believe this to be an upstream problem in a way, but my guess is that if we were using the *current* diffutils to match the KDE version (and a maintainer had done his job in a timely fashion) this bug would NOT exist. My advice, at least until Arch can get diff3 (or, preferably, kdiff3) packaged, STAY AWAY FROM kompare !!! At least with anything critical, at least until this issue is better understood.
Upstream, right... diffutils have nothing to do with KDE at all... Arch has diff3 and kdiff3 packaged ... it's no 'issue' at all.
Okay, I apologize for that 'tone' of mine but I get pissed easily when wasting so much time on something as this.
I know your problem very well as I'm constantly backing up stuff, comparing directories and hunting duplicates.
After unpacking everything the first step to take is to run a comprehensive check for binary duplicates. My tool of choice for that task is Total Commander (Win32 shareware, no timeout, just a minor nag screen on start, works fine with Wine). Just a few random keyboard shortcuts you might find handy (see the help file for explanations): ctrl-b, ctrl-f1(-f6), ctrl-c,y, alt-f7
After killing them, go on to compare ("synchronize") similar directories, either with Total Commander, Krusader, rsync, diff -R, whatever.
If you encounter similar named files, check the dates and kill the older ones, or compare them if they're not binary.
And so on... you know best what kind of data you got and how to split it up.
Throwing plain diff on a set of thousands of files, possibly gigabytes is a huge waste of time and computing/power resources. Adding a frontend will make it crash most of the time, especially with binary files. I think using kdiff3 to compare the Arch Linux 0.8 base beta1 iso to the beta2 one alone should suffice. A few more gigabytes of RAM might help though. -
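For the binary-duplicate pass specifically, this is easy to script without any GUI tool at all. A minimal sketch (Python; the chunk size is arbitrary) that hashes every file under a root and groups exact duplicates, much like the per-file MD5 pass the Windows suite described above did:

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files under `root` by MD5 digest; return only groups with >1 member."""
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so huge archives don't blow up memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
    return {digest: paths for digest, paths in by_hash.items() if len(paths) > 1}
```

On a 15 GB tree, grouping by file size first and hashing only size-collisions would cut the work considerably; the full-hash version above is the simple, exhaustive variant.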
(ORA-00604: error occurred at recursive SQL level 1 ORA-01882: timezone reg
I use NetBeans, and I am trying to connect to the Oracle server. My laptop's time zone is West Central Africa, and when I try to connect it gives this error (ORA-00604: error occurred at recursive SQL level 1; ORA-01882: timezone region not found). It started with LOCALE NOT FOUND; then I got some tips online and now it's a new error. Thanks.
Hi;
What is your database version? You may be hitting a bug. Please see:
Bug 5499393 - ORA-1882 from partition maintenance when territory is HUNGARY with TIMESTAMP partitioning [ID 5499393.8]
Regards
Helios
Oracle 10g Enterprise Manager Console bug?
I think I have come across a bug in the Oracle 10g Enterprise Manager Console. Go into your database, then into Schema, select your schema from the list, then Tables. Once you're in your tables, find one that you know fairly well, right-click it, and select Show Dependencies. A small window will open with just that table and its tablespace and user. Select user, database name, profile, then default. If you right-click on "default" and select Show Dependencies again, a blank blinking window will open and stay stuck in an infinite loop until you kill the application window. I think that this is a cumbersome tablespace issue. Any thoughts?
Hi Ugonic
Sorry, both are the same; it's a spelling mistake - it's "WCMMISLINK". Actually, my database is getting updated, but this alert log is being generated in the 10g Enterprise Manager console.
The sql statement is
CREATE MATERIALIZED VIEW ADASNAP REFRESH FAST START WITH SYSDATE NEXT SYSDATE+1 AS
SELECT * FROM ADA@WCMMISLINK;
CREATE MATERIALIZED VIEW BGTABSNAP REFRESH FAST START WITH SYSDATE NEXT SYSDATE+1 AS
SELECT * FROM BGTAB@WCMMISLINK;
The above query is scheduled to run every day,
and I get the following alert log in the 10g Enterprise Manager console:
Generic Alert log ORA-12012: error on auto execute of job 54
ORA-04052: error occurred when looking up remote object WCMM.SYS@WCMMSERVERLINK
ORA-00604: error occurred at recursive SQL level 3
ORA-12514: TNS: no listener
ORA-06512: at "SYS.DBMS_SNAPSHOT" line 1883
Can you guide me as to why the above errors are occurring?
Regards
Niranjan -
Sorry for long post... I have this scenario:
- Issue is on PS3, PS2 works fine
- I create a new object via "$myObj = New-Object PSObject"
- I append a number of PSMembers to it via something similar to:
"if (-not (Get-Member -InputObject $myObj -Name MyProperty))
Add-Member -InputObject $myObj -MemberType ScriptProperty -Name MyProperty -Value { return MyFunction }
I am seeing situations where what I believe is the call to "Get-Member" causes a huge number of invocations that appear to be calling the equivalent of the "MyFunction" in the above sample. I have debug statements in the MyFunction
function and nested functions and can see they are called by what appears to be the PS framework itself. Even running a "Get-PSCallstack" call from one of these functions seems to also trigger the debug statements to be hit (midway through the callstack
output??!) - bit confused by that but it's like PS is invoking the member in order to reflect it or something.
I used Trace-Command and could see that the "MemberResolution" trace source is repeatedly/infinitely called with the output:
MemberResolution Information: 0 : Matching "*"
MemberResolution Information: 0 : Generating the total list of members
MemberResolution Information: 0 : Type table members: 0.
MemberResolution Information: 0 : Adapted members: 0.
MemberResolution Information: 0 : 21 total matches.
This appears to be in an infinite loop. Therefore I added "-ListenerOption Callstack" to the Trace-Command call and captured the following and more (notice the call to PSObject.ToStringEmptyBaseObject()):
at System.Management.Automation.PSObject.AdapterGetMembersDelegate[T](PSObject msjObj)
at System.Management.Automation.PSMemberInfoIntegratingCollection`1.GetIntegratedMembers(MshMemberMatchOptions matchOptions)
at System.Management.Automation.PSMemberInfoIntegratingCollection`1.Match(String name, PSMemberTypes memberTypes, MshMemberMatchOptions matchOptions)
at System.Management.Automation.PSObject.ToStringEmptyBaseObject(ExecutionContext context, PSObject mshObj, String separator, String format, IFormatProvider formatProvider)
at System.Management.Automation.PSObject.ToString(ExecutionContext context, Object obj, String separator, String format, IFormatProvider formatProvider, Boolean recurse, Boolean unravelEnumeratorOnRecurse)
at System.Management.Automation.PSObject.ToString()
at System.String.Join(String separator, Object[] values)
at System.Management.Automation.ParameterBinderBase.BindParameter(CommandParameterInternal parameter, CompiledCommandParameter parameterMetadata, ParameterBindingFlags flags)
I then disassembled the relevant code in Reflector and could see that the ToString() method of PSObject has a check for "if obj.immediateBaseObjectIsEmpty" then to call the ToStringEmptyBaseObject() function. I could also see that the immediateBaseObjectIsEmpty
field is set in the CommonInitialization internal method if "obj is PSCustomObject" (which is the actual object type you get when you New-Object PSObject).
I then updated my code to create "$myObj = New-Object Object" (i.e. System.Object instead) and the issue goes away.
Can anyone explain what I may have done wrong here or is this looking like a bug in PowerShell?
Cheers!

Hi Matt,
I'm not sure if we are on the same page? Getting the list of (i.e. metadata of) the members shouldn't invoke the member. I have used this technique in many places and it's never been an issue whereby you "wrap" a utility function with
a ScriptProperty or ScriptMethod to provide OO semantics on a PS object. The member isn't invoked by Get-Member, only when it's called explicitly.
Here's a small example:
function TestFunction {
    Write-Host "TestFunction called"
    return "TestFunction return value"
}
$myObj = New-Object PSObject
Add-Member -InputObject $myObj -MemberType ScriptProperty -Name TestScriptProperty -Value {
    Write-Host "TestScriptProperty called"
    return TestFunction
}
Write-Host "Getting members"
Get-Member -InputObject $myObj -Name TestScriptProperty | Select *
Write-Host "Invoking property"
$myObj.TestScriptProperty
Here's the output (Get-Member output a bit mangled due to the text wrapping):
Getting members
TypeName Name
MemberType Definition
System.Management.Automation.PSCustomOb... TestScriptProperty
ScriptProperty System.Object TestScriptProperty {get=...
Invoking property
TestScriptProperty called
TestFunction called
TestFunction return value
Yup I see your point, but when you run your original code with this example object you can see there is no loop. There must either be a loop in the function or in the body of the script itself. This is very tough to troubleshoot without seeing the code you
are actually running.
You said you are adding several members to the custom object. Is this done in a loop? -
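For the record, the pitfall Matt is describing - reflection that evaluates computed members - exists in other languages too. A small Python analogue (illustrative only, not PowerShell semantics): `inspect.getmembers` fetches each attribute's value, so a property getter with side effects runs during what looks like pure introspection, while merely listing names with `dir` does not invoke it:

```python
import inspect

calls = []

class Widget:
    @property
    def expensive(self):
        # Side effect makes the invocation observable.
        calls.append("expensive getter ran")
        return 42

w = Widget()

assert "expensive" in dir(w)   # listing names does NOT run the getter
assert calls == []

members = dict(inspect.getmembers(w))  # ...but getmembers fetches values
assert members["expensive"] == 42
assert calls == ["expensive getter ran"]
```

The general lesson matches the thread: whether "get the member metadata" invokes the member depends entirely on how the reflection API is implemented.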
SPF IPv4 IPv6 Cross-Checking Bug - Producing PermError, Rejecting Mails
Messaging Server's SPF query does not skip the cross-check between IPv4 and IPv6 mechanisms; instead it produces a PermError. This would reject a lot of incoming mail from domains that publish both IPv4 and IPv6 SPF records.
I suspect that it is a bug.
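I'd agree it looks like a bug: per the SPF specification (RFC 4408, later RFC 7208), an ip4: mechanism simply does not match when the connecting client is IPv6 (and vice versa); it must be skipped, not turned into a PermError, which is reserved for genuinely malformed records. A minimal sketch of the intended behaviour (Python, illustrative only - not the Messaging Server code; the addresses are from the gmail.com trace below):

```python
import ipaddress

def ip4_mechanism_matches(client_ip, network):
    """Evaluate an ip4:<network> SPF mechanism against the client address.
    For an IPv6 client the mechanism is simply skipped (no match, no error)."""
    addr = ipaddress.ip_address(client_ip)
    if addr.version != 4:
        return False  # skip this mechanism; do NOT raise a permanent error
    return addr in ipaddress.ip_network(network)

# An IPv6 client never matches an ip4: mechanism, but evaluation continues.
print(ip4_mechanism_matches("2a00:1450:400c:c05::22e", "64.233.160.0/19"))  # -> False
print(ip4_mechanism_matches("64.233.160.1", "64.233.160.0/19"))            # -> True
```

Erroring out instead of skipping is what turns a perfectly valid dual-stack record into a rejection.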
Linux:
bin/imsimta version
Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit (built Aug 30 2012)
libimta.so 7u4-27.01 64bit (built 08:50:46, Aug 30 2012)
Using /opt/sun/comms/messaging64/config/imta.cnf (compiled)
Linux mx1.mathewsystems.com 2.6.18-028stab101.1 #1 SMP Sun Jun 24 19:50:48 MSD 2012 x86_64 x86_64 x86_64 GNU/Linux
Solaris x86:
bin/imsimta version
Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit (built Aug 30 2012)
libimta.so 7u4-27.01 64bit (built 08:47:11, Aug 30 2012)
Using /opt/sun/comms/messaging64/config/imta.cnf (compiled)
SunOS COMM1 5.10 Generic_147441-01 i86pc i386 i86pc
./spfquery -i 2a00:1450:400c:c05::22e -v gmail.com
Running SPF query with:
IP address: 2a00:1450:400c:c05::22e
Domain: gmail.com
Sender: [email protected] (local-part: postmaster)
HELO Domain: gmail.com
01:09:35.41: ----------------------------------------------------------------
01:09:35.41: SPFcheck_host called:
01:09:35.41: source ip = 2a00:1450:400c:c05::22e
01:09:35.41: domain = gmail.com
01:09:35.41: sender = [email protected]
01:09:35.41: local_part = postmaster
01:09:35.41: helo_domain = gmail.com
01:09:35.41:
01:09:35.41: Looking up "v=spf1" records for gmail.com
01:09:35.42: DNS query status: Pass
01:09:35.42: "v=spf1 redirect=_spf.google.com"
01:09:35.42:
01:09:35.42: Parsing modifier: "redirect = _spf.google.com"
01:09:35.42: Processing macros in _spf.google.com
01:09:35.42: Returned: _spf.google.com
01:09:35.42:
01:09:35.42: No match found, following redirect to _spf.google.com
01:09:35.42: Recursing into SPFcheck_host
01:09:35.42: ----------------------------------------------------------------
01:09:35.42: SPFcheck_host called:
01:09:35.42: source ip = 2a00:1450:400c:c05::22e
01:09:35.42: domain = _spf.google.com
01:09:35.42: sender = [email protected]
01:09:35.42: local_part = postmaster
01:09:35.42: helo_domain = gmail.com
01:09:35.42:
01:09:35.42: Looking up "v=spf1" records for _spf.google.com
01:09:35.43: DNS query status: Pass
01:09:35.43: "v=spf1 include:_netblocks.google.com include:_netblocks2.google.com include:_netblocks3.google.com ?all"
01:09:35.43:
01:09:35.43: Parsing mechanism: " include : _netblocks.google.com"
01:09:35.43: Assuming a Pass prefix
01:09:35.43: Processing macros in _netblocks.google.com
01:09:35.43: Returned: _netblocks.google.com
01:09:35.43: Recursing into SPFcheck_host
01:09:35.43: ----------------------------------------------------------------
01:09:35.43: SPFcheck_host called:
01:09:35.43: source ip = 2a00:1450:400c:c05::22e
01:09:35.43: domain = _netblocks.google.com
01:09:35.43: sender = [email protected]
01:09:35.43: local_part = postmaster
01:09:35.43: helo_domain = gmail.com
01:09:35.44:
01:09:35.44: Looking up "v=spf1" records for _netblocks.google.com
01:09:35.44: DNS query status: Pass
01:09:35.44: "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"
01:09:35.44:
01:09:35.44: Parsing mechanism: " ip4 : 216.239.32.0/19"
01:09:35.44: Assuming a Pass prefix
01:09:35.44: "ip4" not supported for IPv6; returning PermError
01:09:35.44: Mechanism failed with PermError
01:09:35.44:
01:09:35.44: SPFcheck_host is returning PermError
01:09:35.45: Returned from recursion with status: PermError
01:09:35.45: Returning with status: PermError
01:09:35.45: Mechanism failed with PermError
01:09:35.45:
01:09:35.45: SPFcheck_host is returning PermError
01:09:35.45: Returned from recursion with status: PermError
01:09:35.45:
01:09:35.45: SPFcheck_host is returning PermError
01:09:35.45: ----------------------------------------------------------------
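For reference, RFC 7208 (section 5.6) says an ip4 mechanism simply does not match when the connecting address is IPv6; only a malformed record should yield PermError. A minimal Python sketch of the required behaviour (hypothetical illustration, not the Messaging Server code):

```python
import ipaddress

def ip4_mechanism_matches(client_ip, network_cidr):
    """RFC 7208 section 5.6: an ip4 mechanism does not match an IPv6
    client; it must be skipped, not treated as a PermError."""
    client = ipaddress.ip_address(client_ip)
    net = ipaddress.ip_network(network_cidr)
    if client.version != net.version:
        return False  # address family mismatch: skip the mechanism
    return client in net

# The gmail.com case from the trace above:
print(ip4_mechanism_matches("2a00:1450:400c:c05::22e", "216.239.32.0/19"))  # False, not an error
```

With that behaviour the evaluation above would fall through all the ip4 mechanisms and end at the record's "?all", giving a Neutral result instead of PermError.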
Mathew
Edited by: 994254 on 15.03.2013 17:18 -
A potential bug on the new feature (12c) of Identity Column?
Hi,
I am testing a newly introduced feature of Identity Column in Oracle 12c. I used EclipseLink (JPA) to access the database.
I may have found a potential bug with this feature: I get "ORA-30667: cannot drop NOT NULL constraint on a DEFAULT ON NULL column" when I try to insert a row, even though my code doesn't explicitly drop the NOT NULL constraint.
When I delete all the tables under the user and re-define the user and the tables, the error disappears.
Here are the details:
Error:
DatabaseException Internal Exception: java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1 ORA-30667: cannot drop NOT NULL constraint on a DEFAULT ON NULL column Error Code: 604 Call: INSERT INTO MyTable (ID, SOMEID, SOMEDATE) VALUES (?, ?, ?) bind => [null, 100100147, 2013-11-29 Query: InsertObjectQuery(cus.entity.MyTable@1a9ea5b)
Table definition:
CREATE TABLE MyTable (
  id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY PRIMARY KEY,
  someId INT NOT NULL,
  someDate DATE NOT NULL
)
PARTITION BY RANGE (someDate)
INTERVAL (numtodsinterval(1,'year'))
SUBPARTITION BY HASH (someId)
SUBPARTITIONS 20
(PARTITION p0 VALUES LESS THAN (TO_DATE('01-12-2013', 'DD-MM-YYYY')));
Could anyone tell me if there might be a bug associated with the new feature or if there was something wrong with my code?
I would appreciate it if anyone could help.
"here is a reproducible test case in the SCOTT schema - if it reproduces for you open an SR with Oracle"
Yes. I have just followed the instruction you posted and managed to re-produce the same error I reported earlier. Here are the details of the script output following your posted instruction:
table MYTABLE dropped.
purge recyclebin
table MYTABLE created.
1 rows inserted.
table MYTABLE dropped.
OBJECT_NAME ORIGINAL_NAME OPERATION TYPE TS_NAME CREATETIME DROPTIME DROPSCN PARTITION_NAME CAN_UNDROP CAN_PURGE RELATED BASE_OBJECT PURGE_OBJECT SPACE
BIN$6NCDTxmXTb2QBpUWF0kGqw==$0 SYS_C0010655 DROP INDEX USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030808 NO YES 98789 98789 98812 8
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:30:56 2013-12-01:10:32:17 4030812 NO NO 98790 98789 98789 0
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 NO NO 98833 98789 98789 1024
BIN$Ka0sgN7XRBeCsjyXTQ76cA==$0 MYTABLE DROP Table Composite Partition USERS 2013-12-01:10:31:37 2013-12-01:10:32:17 4030812 & -
A bug in MergedDictionaries and ThemeDictionaries.
Hi everyone. I found a bug when developing in .NET for WinRT with XAML. To reproduce the bug, follow these steps:
(1) Create an empty Windows Store app with XAML, call it "ReproduceDictIssue".
(2) Place the following AppBarButton on the main page so that you can see the effect of other steps:
<AppBarButton Icon="Accept"/>
(3) Run the app to see a dark themed app page with a white AppBarButton. Close the app.
(4) Replace App.xaml with the following code: (If you didn't name the project as ReproduceDictIssue, you might want to change the corresponding part so that the code works correctly for you)
<Application
x:Class="ReproduceDictIssue.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:ReproduceDictIssue"
RequestedTheme="Dark">
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Dark">
<SolidColorBrush x:Key="AppBarItemForegroundThemeBrush" Color="Red"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>
</Application.Resources>
</Application>
(5) Run the app, now the app bar button is red. NOTE that the "Accept" (tick) icon is also red. Close the app.
(6) Replace the App.xaml with the following code:
<Application
x:Class="ReproduceDictIssue.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:ReproduceDictIssue"
RequestedTheme="Dark">
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Dark">
<SolidColorBrush x:Key="AppBarItemForegroundThemeBrush" Color="Red"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary/>
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
</Application.Resources>
</Application>
That is, merge a dictionary to it.
(7) Run the app. No difference here. Close the app.
(8) Replace App.xaml with the following code:
<Application
x:Class="ReproduceDictIssue.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:ReproduceDictIssue"
RequestedTheme="Dark">
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Dark">
<SolidColorBrush x:Key="AppBarItemForegroundThemeBrush" Color="Red"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary/>
</ResourceDictionary.MergedDictionaries>
<Style TargetType="AppBarButton">
<Setter Property="Foreground" Value="{ThemeResource AppBarItemForegroundThemeBrush}"/>
</Style>
</ResourceDictionary>
</Application.Resources>
</Application>
That is, add a default AppBarButton style to it.
(9) Run the app. NOTE that the "Accept" (tick) icon is now WHITE, which is indeed the unoverriden value. Close the app.
(10) Remove this line:
<Setter Property="Foreground" Value="{ThemeResource AppBarItemForegroundThemeBrush}"/>
(11) Run the app. Now the color is correct again. Close the app.
(12) Add the line back:
<Setter Property="Foreground" Value="{ThemeResource AppBarItemForegroundThemeBrush}"/>
And remove this line (this line is in the ResourceDictionary.MergedDictionaries):
<ResourceDictionary/>
(13) Run the app. The color is still correct. Close the app.
(14) Replace App.xaml with the following code:
<Application
x:Class="ReproduceDictIssue.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:ReproduceDictIssue"
RequestedTheme="Dark">
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Dark">
<SolidColorBrush x:Key="AppBarItemForegroundThemeBrush" Color="Red"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>
<!-- <ResourceDictionary/> -->
</ResourceDictionary.MergedDictionaries>
<Style TargetType="AppBarButton">
<Setter Property="Foreground" Value="{ThemeResource AppBarItemForegroundThemeBrush}"/>
</Style>
</ResourceDictionary>
</Application.Resources>
</Application>
That is, move the theme dictionary into the merged dictionary.
(15) Run the app. The color is still correct. Close the app.
(16) Now uncomment this line:
<!-- <ResourceDictionary/> -->
(17) Run the app. The color of "Accept" (tick) icon is white again.
Conclusion: there is a problem in theme resource look-up. For a Style's Setter, the look-up might be in the following order:
last merged dictionary -> second last -> ... -> first merged dictionary -> specific theme dictionary (like Light/Dark/HighContrastWhite) -> general theme dictionary (like Default) -> programmer-specified entries -> system entries.
Note that the merged-dictionary part is recursive. Due to this behaviour, the following dictionary:
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Dark"> <!-- 2 -->
<SolidColorBrush x:Key="AppBarItemForegroundThemeBrush" Color="Red"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary/> <!-- 1 -->
</ResourceDictionary.MergedDictionaries>
<!-- 3 -->
<Style TargetType="AppBarButton">
<Setter Property="Foreground" Value="{ThemeResource AppBarItemForegroundThemeBrush}"/>
</Style>
</ResourceDictionary>
resolves the ThemeResource AppBarItemForegroundThemeBrush in the following steps:
It first asks dictionary 1 to resolve ThemeResource AppBarItemForegroundThemeBrush.
Dictionary 1 has no merged dictionaries and no theme dictionaries. It looks at its programmer-specified entries, which are empty, so it falls back to the system entries. Note that dictionary 1
does not override AppBarItemForegroundThemeBrush, so it returns the system default value for the dark theme, which is white.
The large dictionary now finishes the look-up with the value returned from dictionary 1, which is white.
In that process dictionary 2 is never asked to look up AppBarItemForegroundThemeBrush.
This behaviour is contrary to most people's expectations. A ResourceDictionary should
suppress system default resources when performing a look-up that sits in a larger dictionary's recursive part. Actually
IT DOES. The proof is that if the resource is not in a Setter, it behaves as expected, for example inside a control template.
You might have noticed that the circle of the app bar button is pink and if the setter is removed, the color of "Accept" (tick) icon is correct. That'll be the proof.
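The broken look-up order can be modelled in a few lines of Python (a hypothetical sketch of the resolution logic described above, not the actual XAML resolver): because the empty merged dictionary is consulted first and falls all the way through to the system defaults, the outer theme dictionary is never asked.

```python
SYSTEM_DEFAULTS = {"AppBarItemForegroundThemeBrush": "White"}

class ResourceDict:
    def __init__(self, theme=None, entries=None, merged=None):
        self.theme = theme or {}      # ThemeDictionaries (dark theme assumed)
        self.entries = entries or {}  # programmer-specified entries
        self.merged = merged or []    # MergedDictionaries

def lookup(d, key):
    # Buggy order: merged dictionaries first (last to first), and each of
    # them recursively falls back to SYSTEM_DEFAULTS on its own.
    for m in reversed(d.merged):
        value = lookup(m, key)
        if value is not None:
            return value
    for table in (d.theme, d.entries):
        if key in table:
            return table[key]
    return SYSTEM_DEFAULTS.get(key)

app = ResourceDict(
    theme={"AppBarItemForegroundThemeBrush": "Red"},
    merged=[ResourceDict()],  # the empty <ResourceDictionary/>
)
print(lookup(app, "AppBarItemForegroundThemeBrush"))  # prints White, not Red
```

Drop the empty merged dictionary and the same look-up returns Red, matching steps (12)-(13) above.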
Thanks for reading this bug report. I hope someone from Microsoft can follow this bug and remove it in future releases of .NET for WinRT with XAML. Thanks again, everyone.
Looks like the same issue as this thread:
https://social.msdn.microsoft.com/Forums/en-US/85a36b0a-9a19-4921-8dac-3a906067ee1b/overriding-implict-style-breaks-themeresource-references?forum=winappswithcsharp
You have Failed this City --Arrow :)
Yes, exactly! A workaround is to completely override the template so that it uses a custom resource key, which does not fail since no system default resource is included.
Is this a bug in packaging luakit?
I've recompiled webkitgtk2 without geoclue support, as it provided nothing (for me) but a bunch of extra dependencies. After compiling and installing I ran "pacman -Rsn $(pacman -Qdtq)", which removed geoclue with all its dependencies (enchant, for example). It turned out luakit has a hard dependency on (lib)enchant and was unable to run until I reinstalled enchant.
Luakit depends on libenchant (part of the enchant package), but if you check what pacman says about luakit's dependencies you won't find it.
Should I open a bug report, given that webkitgtk (one of the packages in question) is not the same on my machine as in the extra repo?
Pacman doesn't list dependencies recursively.
Most packages (including luakit) have a lot more dependencies than are listed when you do a 'pacman -Si <pkg>'. Each dependency listed by pacman will likely depend on other packages, and those packages will depend on still other packages, etc. etc.
I believe luakit has over 150 actual dependencies. -
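That distinction (direct vs. transitive dependencies) is what a tool like pactree walks. Here is a tiny Python sketch of collecting the transitive closure over a dependency graph; the graph below is simplified and hypothetical, not the real package metadata:

```python
def transitive_deps(pkg, depends):
    """Walk the dependency graph to collect every transitive dependency."""
    seen = set()
    stack = [pkg]
    while stack:
        for dep in depends.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical graph: luakit lists webkitgtk2 directly; enchant only
# shows up transitively, via webkitgtk2.
depends = {
    "luakit": ["webkitgtk2", "luajit"],
    "webkitgtk2": ["enchant", "geoclue"],
}
print(sorted(transitive_deps("luakit", depends)))
# ['enchant', 'geoclue', 'luajit', 'webkitgtk2']
```

So enchant never appears in `pacman -Si luakit` even though luakit cannot run without it.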
How do I submit a bug to microsoft for TEE
I'm not sure, but I think I may have found a bug in TEE on Linux. The
question on Stack Overflow was answered, but I've just run into the same problem when trying to merge our trunk to my branch. In short, the root of the problem is that there was a directory named
Directory. Over the course of time our naming conventions changed and this directory became named
directory (a case change). Interestingly, two files (a *.cpp and a *.h) somehow were still associated with the first directory name even though the directory name was changed with the new case preserved. The problems should be apparent to Linux
users.
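The case-sensitivity at the heart of this can be shown in a few lines of Python (a generic demo of filesystem case sensitivity, not anything TEE-specific):

```python
import os
import tempfile

# On a case-sensitive filesystem (the Linux default), "Directory" and
# "directory" are two distinct paths, so metadata that still points at
# the old casing cannot be resolved locally.
with tempfile.TemporaryDirectory() as tmp:
    os.mkdir(os.path.join(tmp, "Directory"))
    old_casing = os.path.exists(os.path.join(tmp, "Directory"))
    new_casing = os.path.exists(os.path.join(tmp, "directory"))
    print(old_casing, new_casing)  # True False on a case-sensitive filesystem
```

On Windows (case-insensitive by default) both paths resolve, which is why the same TFS metadata only breaks on Linux clients.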
Now, as the link I provided mentions, I did try to fix this by using the tf rename command, but after the command was executed to rename the first file, I could do nothing more using TEE because of this error:
java.io.FileNotFoundException: /home/andy/devel/project/directory/code.cpp (No such file or directory). Ok, I just moved on. A coworker executed the very same command from a Visual Studio Command Prompt and it worked and he was
able to check in the named files. "Great," I thought, I'll merge with main and continue. I did so using the
tf merge -recursive <source_branch> <dest_branch> and promptly ran into the same problem when trying to do
tf status and any other tf command.
It seems that there might be a bug in how metadata is handled in this case when using TEE on Linux. How can I submit this to Microsoft?
http://www.apple.com/feedback/itunesapp.html
https://discussions.apple.com/thread/4553482 - "Many albums' artwork doesn't display in my Library. However once I pull up the info window for one track, and return to the album list, the artwork magically appears."
"...some album covers were missing in album view. Apple says this is a rare bug, but one it has solved and will fix in a minor update soon." - http://allthingsd.com/20121204/itunes-gets-an-upgrade-without-missing-a-beat/