Multiple TC's in workgroup situation?
Is it possible to have multiple TC's in a room (3-5), each one providing backup of 3-4 Macs per TC?
Yes. Turn off wireless on each TC that you want to act as a switch, then turn off DHCP (run it in bridge mode). Connect each TC's WAN port to one of the main router's LAN ports (or to the LAN port of any TC further up the line) and voila! All will be working on the same subnet, with each getting its IP, as well as the IPs of subsequent devices connected to it, from the main router. All Macs will see the TCs' HDDs and be able to access them without a problem. You might want to give them unique names though (both the TCs and the HDDs) to avoid confusion and possible problems.
More info on setting up TCs in bridge mode; just do a "bridge mode" search in this PDF:
http://manuals.info.apple.com/enUS/Designing_AirPort_Networks10.5-Windows.pdf
Similar Messages
-
Stuck in a complicated situation
I have to come up with a query for report which is dependent on what is selected in the 7 drop boxes.
startdate: enter text
enddate: enter text
Department: Deptname
Group: drop down (values like All, Supervisor, Manager, etc.)
Classification: drop down (all drop downs have values like the above)
Shift: drop down
Report Category: drop down
Overtime Type: drop down
Sort Type: drop down
So a user can select "all" job types or a particular one like "carpenter"..
If the person selects "all" then I don't want to put a WHERE clause or have a condition/restriction in my query.
I am not sure how I can build a query for this situation without multiple if/elses for each case. Any ideas?
I will have to say: if group = "All" then don't include the condition;
if group = "All" and Shift = "A" then don't include either;
if reportcategory = "All" and ... and so many conditions??? Is there a way out of this complication?
Rephrasing the question:
I have select * from a,b,c where .........
Now I want to add not only d_column = "1" but also "d"
itself after FROM so that it becomes:
select * from d,a,b,c
Then you just have more strings to dynamically generate. Example:
String fromList = "a,b,c";
String columnList = "x1,x2,y3";
String whereClause = "someColumn=? AND someOtherColumn >= ?...";
if (someCondition) {
    fromList += ",d";
    whereClause += " AND d_column = 1";
    columnList += ",d_field1,d_field2";
}
String sql = "SELECT " + columnList + " FROM " + fromList + " WHERE " + whereClause;
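Building on that idea, the per-drop-down if/else chains can collapse into one helper that appends a condition only when the selection is not "All", collecting the bind parameters alongside. A minimal Java sketch; the table name, column names, and the `WHERE 1=1` base are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ReportQueryBuilder {

    // Append "column = ?" only when the user picked something other than "All",
    // recording the bind value at the same time.
    static void addFilter(StringBuilder where, List<Object> params,
                          String column, String value) {
        if (value != null && !value.equalsIgnoreCase("All")) {
            where.append(" AND ").append(column).append(" = ?");
            params.add(value);
        }
    }

    // Hypothetical report query; only two of the seven drop-downs shown.
    static String build(String group, String shift, List<Object> params) {
        StringBuilder where = new StringBuilder("WHERE 1=1"); // neutral base, no special cases
        addFilter(where, params, "group_name", group);
        addFilter(where, params, "shift", shift);
        return "SELECT * FROM report_data " + where;
    }

    public static void main(String[] args) {
        List<Object> params = new ArrayList<>();
        // group = "All" adds no condition; shift = "A" adds one placeholder.
        System.out.println(build("All", "A", params));
        System.out.println(params);
    }
}
```

The `WHERE 1=1` trick means every appended condition can start with `AND`, so no branch has to decide whether it is the first one.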
... -
How to handle multiple inbound interfaces with WSDL messages
Hi All,
We have a synchronous Abap Proxy -> XI -> WebService scenario. The web service has multiple SOAP actions, e.g. SearchForProduct_WithX and SearchForProduct_WithY, each with a different message type. We have tried to use receiver determination to send the request to the correct SOAP action using conditions, e.g. if field X in the request is populated, use the SearchForProduct_WithX action/message.
But when we run it through the proxy we get this error:
<CODE>IF_DETERMINATION.TOO_MANY_IIFS_CASE_BE</CODE>
<ERRORTEXT>Multiple inbound interfaces not supported for synchronous calls</ERRORTEXT>
Does anybody know how we can get around this, or how best to deal with the multiple SOAP actions per WSDL situation?
Hi Yaghya,
We have used conditions in the Interface Determination. Interestingly if we use an HTTP sender adapter we can use this configuration ... but once we try and use ABAP proxies we get the previous error.
Another related question ... when we use the HTTP adapter we get a connection timeout exception. The same thing happens if we try to use the WSDL tester at /wsnavigator, but we can open the WSDL through the browser. Any idea on this one?
Thanks for all your help. -
Hi,
I would like to know if I can update (merge) multiple lines at once.
Typical situation: I have a master (form) - detail (table) page; I can modify one field in the detail table for each row.
I've made a "merge" button which updates one row in the DB (using the mergeStudent(Student) data-binding command).
I use the binding data of the selected row to do this and it works very well (currentRow.dataprovider).
Now, what is the best (and easiest) solution to update (merge) every row of this table?
I've thought that I could create a "mergeStudents(List<Students>)" function in my EJB, but I don't know if there is a binding object which sends the full list.
Thanks, and sorry for my English.
Select C1,C2,C3,C4,C5 FROM
(SELECT C1,C2,Max(C3) Over() as C3,C4,C5 FROM yourtable) t
WHERE C4<>'0' -
How multiple instances license is calculated?
Hi
I am a bit confused about licensing. SQL Server licensing is per core; for instance, Standard Edition on a 2-core physical machine is 2 * $1,793. Correct me if I am wrong.
1. Does the cost remain the same if I install additional instances on the same physical machine?
2. What about the case of multiple cluster instances?
3. Does increasing the number of nodes in a cluster increase the license cost? I believe so.
Please refer Microsoft link, if possible.
Thanks
I don't think you need to buy multiple licenses for multiple instances on one server. You
can run multiple instances of SQL Server 2005 on a single computer. Multiple instances are used by organizations that have several applications running on a server but want them to run in isolation so that any problem in one instance will not affect the other
instances. In SQL Server 2005, you can now run multiple instances with the Workgroup, Standard, and Enterprise editions when they are licensed server/CAL or on a per-processor basis. Here is the Pricing and Licensing FAQ:
http://www.microsoft.com/sqlserver/2005/en/us/pricing-licensing-faq.aspx
Here is a thread about licensing:
http://social.msdn.microsoft.com/Forums/en-US/sqlreportingservices/thread/a5fd2fb7-dc2f-4736-85b9-1eb581e56a23
http://www.microsoft.com/sql/howtobuy/default.mspx
Raju Rasagounder Sr MSSQL DBA -
We are using OWB repository 10.2.0.2.0 and OWB client 10.2.0.2.8. The Oracle version is 10g (10.2.0.2.0). OWB is installed on a Sun 64-bit server.
As we use lookups in OWB mappings, we have a situation where we need to create lookups from the same table for different results in the same OWB map. Here is the situation.
1) Table Ltab
Lookup key = sourcekey1
and lookupcode in ( 'A', 'M')
2) Table Ltab
Lookup key = sourcekey1
and lookupcode in ( 'K', 'V')
We could use (lookupcode = 'A' OR lookupcode = 'M') instead of lookupcode IN ('A', 'M') as well.
I do not see a way to code as above in OWB lookup operator.
Is it doable in OWB via lookup operator?
Alternatively, we could create multiple views to support the above situation and attach the corresponding views to the lookups.
Did anyone in this forum use the above approach in large projects?
Any idea?
Thanks in advance.
RI
Hi,
I suggest using a joiner operator instead of the lookup. The lookup operator generates a left outer join anyway and in the join condition you have much more flexibility.
I would not recommend using views, since this splits your ETL logic into two different locations.
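To make the joiner suggestion concrete, here is a rough sketch of the left-outer-join semantics a lookup generates, with the IN-list restriction folded into the join condition. This is plain Java, not OWB-generated code, and all data and names are invented:

```java
import java.util.*;

public class LeftOuterLookup {
    // One row of the hypothetical lookup table Ltab.
    record LookupRow(String key, String code, String value) {}

    // Left-outer-join style lookup: every source key yields a result,
    // null when nothing matches; the code restriction (IN ('A','M'))
    // lives in the join condition, as it would with a joiner operator.
    static List<String> lookup(List<String> sourceKeys, List<LookupRow> ltab,
                               Set<String> allowedCodes) {
        List<String> out = new ArrayList<>();
        for (String k : sourceKeys) {
            String match = null;
            for (LookupRow row : ltab) {
                if (row.key().equals(k) && allowedCodes.contains(row.code())) {
                    match = row.value();
                    break;
                }
            }
            out.add(match); // null = no match, like outer-join padding
        }
        return out;
    }

    public static void main(String[] args) {
        List<LookupRow> ltab = List.of(
            new LookupRow("S1", "A", "v-A"), new LookupRow("S2", "K", "v-K"));
        // Only codes A/M qualify, so S2's row (code K) is filtered out
        // and S2 comes back null instead of disappearing.
        System.out.println(lookup(List.of("S1", "S2"), ltab, Set.of("A", "M")));
    }
}
```

The second lookup (codes K/V) would simply be another call with a different `allowedCodes` set against the same table, which is exactly what the joiner's flexible join condition buys you.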
Regards,
Carsten. -
I want to transfer a product from 1 adobe ID to another adobe ID
The wonderful experience with Adobe:
step 1: years ago I created a behance account
step 2: 1 year ago I created an Adobe ID for CS6
step 3: today I logged into Behance and thereby created a SECOND Adobe ID
step 4: I tried to cancel my CS6 Adobe ID with a 25-minute phone call (with very bad reception) - it did not work because CS6 is connected to the Adobe ID I am trying to cancel
step 5: I tried to transfer the CS6 product to the other ID, but I am both the sender and the recipient, had to fill in two forms and pretend to be a different person, and I have now had more than enough of this crap
step 6: Adobe get your act together. It is bad enough that in this era we need 100 accounts for our online business, but for Adobe I need to have two accounts that are both worthless?!
Did Adobe really not think about people that have multiple email addresses, causing this situation?
Can Adobe not just make this go away by merging the accounts?
Your products and your services are very user-unfriendly. Maybe you can try to work on that.
Transfer an Adobe product license
-
I have just started to get into iOS app development. I mostly create websites, so I'm thinking about taking that knowledge over to iOS.
Basically I would be making simple, lightweight apps, no games or anything like that. So my question is: which would be the best option for a first-time Mac buyer to go with?
I've been looking at the MacBook Pro 13-inch: 2.3 GHz and the MacBook Air 11-inch: 128 GB.
I really like the MacBook Air, but I want to know if it would be enough to do the things I wish to do.
Hi Armor
Welcome to Apple Discussions.
I recall 20 years ago working on projects where build times were measured in minutes and hours! That taught you patience, and also gave plenty of opportunity for thinking and coffee. On any modern computer, it's hard to conceive of a project so large and complex that it would require more than a few seconds to perform a build, which means that you don't really need to consider how powerful its processor is, or how much memory it has, since any reasonable system will perform well.
That means that your choice of options depends upon other factors. I'd avoid the MB Air because of its lack of connectivity, and it has no CD/DVD drive. My own choice of 13" MB works well for development, but from time to time I do wish for a larger screen; however, I find that careful layout of windows using multiple Spaces can ease the situation somewhat. If you go for a larger screen notebook, the computer gets heavier and more unwieldy. A number of my colleagues who opted for 17" MBPs a few years ago have often complained about the weight, and at least one of them has "downsized" to a smaller screen.
Bob -
'FOR ALL ENTRIES' in SELECT statements
Hi,
I have a doubt about the workings of the 'FOR ALL ENTRIES' option in the SELECT statement. Here is my scenario.
Table A - Document Header Level (Key: Doc Number)
Internal Table B - Document Item level (Keys: Doc num and Doc Item).
So, for each record in Table A, table B will have multiple records.
In this situation, how will the below SELECT work?
SELECT <field names> INTO <some internal table>
FROM A
FOR ALL ENTRIES in B
WHERE doc_num = B-doc_num.
Will the above SELECT result in duplicate records or not?
(I tested it and found that it doesn't! I was a little surprised and wanted to confirm that.)
Thanks & Regards,
Sree
Hi,
The FOR ALL ENTRIES option basically sorts out the entries in the internal table based on the WHERE condition, and thus it only picks the unique entries from the list.
So indeed, since your table A is a header table, it will give you only a single record per document. If you go the reverse way and look up B for all entries in A, it will give you multiple records, as table B has multiple rows for each value in A.
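The de-duplication described above can be mimicked outside ABAP: conceptually, FOR ALL ENTRIES runs against the distinct key set of the driver table. A rough Java sketch, with the table contents invented for illustration:

```java
import java.util.*;

public class ForAllEntriesSketch {
    // FOR ALL ENTRIES effectively queries against the DISTINCT key set of the
    // driver (item) table, so each matching header row comes back exactly once.
    static List<String> selectHeaders(List<String[]> items, Map<String, String> headers) {
        Set<String> distinctDocs = new LinkedHashSet<>();
        for (String[] item : items) distinctDocs.add(item[0]); // item[0] = doc number
        List<String> result = new ArrayList<>();
        for (String doc : distinctDocs) {
            String header = headers.get(doc);
            if (header != null) result.add(header);
        }
        return result;
    }

    public static void main(String[] args) {
        // Internal table B: three item rows, but only two distinct doc numbers.
        List<String[]> items = Arrays.asList(
            new String[]{"1000", "10"}, new String[]{"1000", "20"},
            new String[]{"2000", "10"});
        // Table A: header rows keyed by doc number.
        Map<String, String> headers = Map.of("1000", "Header-1000", "2000", "Header-2000");
        System.out.println(selectHeaders(items, headers)); // two headers, no duplicates
    }
}
```

This matches Sree's observation: three item rows, but only two header records in the result.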
Regards,
Jagath -
Process Control in Confirmation Parameters
Dear All,
There is a Process Control tab in SPRO 'Confirmation Parameters for Orders'.
I know it is used for decoupling of production processes like GI, GR & activity posting. What are the basic settings required for the same? Please guide.
Hi,
Business Process:
If you are dealing with hundreds of components for your production order (as in the auto/electronics industry),
then executing the GI, GR and activity posting may consume a lot of system resources.
If a number of people are doing confirmations at a number of locations (multiple-plant scenario), then the situation will be even worse as far as system load is concerned.
Etc....
With this Process Control we can handle the situation in a better way.
As you said, there are a couple of options to do that:
1. Online (the posting happens then and there; here you need to wait till the posting happens)
2. In an update task (if you are working in an update task, the next dialog step is possible immediately)
3. In the background (later, probably at night when the load on the system is very low)
For this you need to Schedule the Program CORUPROC as a Batch Job.
There are other Customization Settings also required to be made for this:
1. Define Time for Confirmation Processes (OPKC)
Here you define how you are going to do your confirmation with reference to GI, GR and activity.
The order category for production orders is 10, so you need to do your customization with reference to this.
2. Define Paralleling Type for Confirmation Processes (OPKB).
You need to define the number of parallel tasks and the maximum number of items per material document (for goods movements).
Now say you have scheduled CORUPROC 5 times a day.
But if you want to do your posting immediately, then and there, you can go with transaction CO1P.
But this is to be used in rare cases.
Hope this clarifies..
Revert for further discussion..
Regards,
Siva -
Apache plug-in won't load balance requests evenly on cluster
I can't seem to get the Apache plug-in to actually do round-robin load balancing of HTTP requests. It does random-robin, as I like to call it, since the plug-in will usually hit all the servers in the cluster, but in a random fashion.
I've got three managed servers:
192.168.1.5:8001 (WL6 on Linux), 192.168.1.2:8001 (WL6 on Linux), 192.168.1.7:8001 (WL6 on Linux)
Admin server on 192.168.1.7:7000 (WL6 on W2k)
My Apache server is 1.3.9 (RedHat SSL) on 192.168.1.52.
The log file for each servers has something like this:
####<Apr 19, 2001 1:18:54 AM MDT> <Info> <Cluster> <neptune> <cluster1server1> <main> <system> <> <000102> <Joined cluster cluster1 at address 225.0.0.5 on port 8001>
####<Apr 19, 2001 1:19:31 AM MDT> <Info> <Cluster> <neptune> <cluster1server1> <ExecuteThread: '9' for queue: 'default'> <> <> <000127> <Adding 3773576126129840579S:192.168.1.2:[8001,8001,7002,7002,8001,7002,-1]:192.168.1.52 to the cluster>
####<Apr 19, 2001 1:19:31 AM MDT> <Info> <Cluster> <neptune> <cluster1server1> <ExecuteThread: '11' for queue: 'default'> <> <> <000127> <Adding -6393447100509727955S:192.168.1.5:[8001,8001,7002,7002,8001,7002,-1]:192.168.1.52 to the cluster>
So I believe I have correctly created a cluster, although I did not bother to assign replication groups for HTTP session replication (yet).
The Apache debug output indicates it knows about all three servers, and I can see it doing the "random-robin" load balancing. Here is the output:
Thu Apr 19 00:20:53 2001 Initializing lastIndex=2 for a list of length=3
Thu Apr 19 00:20:53 2001 Init Srvr# [1] = [192.168.1.2:8001] load=1077584792 isGood=1077590272 numSkip=134940256
Thu Apr 19 00:20:53 2001 Init Srvr# [2] = [192.168.1.5:8001] load=1077584792 isGood=1077590272 numSkip=134940256
Thu Apr 19 00:20:53 2001 Init Srvr# [3] = [192.168.1.7:8001] load=1077584792 isGood=1077590272 numSkip=134940256
Thu Apr 19 00:20:53 2001 INFO: SSL is not configured
Thu Apr 19 00:20:53 2001 Now trying whatever is on the list; ci->canUseSrvrList = 1
Thu Apr 19 00:20:53 2001 INFO: New NON-SSL URL
Thu Apr 19 00:20:53 2001 general list: trying connect to '192.168.1.7'/8001
Thu Apr 19 00:20:53 2001 Connected to 192.168.1.7:8001
Thu Apr 19 00:20:53 2001 INFO: sysSend 320
Thu Apr 19 00:20:53 2001 INFO: Reader::fill(): first=0 last=0 toRead=4096
Thu Apr 19 00:21:06 2001 parsed all headers OK
Thu Apr 19 00:21:06 2001 Initializing lastIndex=1 for a list of length=3
Thu Apr 19 00:21:06 2001 ###Response### : Srvr# [1] = [192.168.1.5:8001] load=1077584792 isGood=1077546628 numSkip=1077546628
Thu Apr 19 00:21:06 2001 ###Response### : Srvr# [2] = [192.168.1.2:8001] load=1077584792 isGood=1077546628 numSkip=1077546628
Thu Apr 19 00:21:06 2001 ###Response### : Srvr# [3] = [192.168.1.7:8001] load=1077584792 isGood=1077546628 numSkip=1077546628
Thu Apr 19 00:21:06 2001 INFO: Reader::fill(): first=0 last=0 toRead=4096
Basically, the lastIndex=XXX appears to be random. It may do round-robin for 4 or 5 connections, but then it always resorts to randomly directing new connections.
This is what the configuration looks like using the plug-in's /weblogic?__WebLogicBridgeConfig URL:
Weblogic Apache Bridge Configuration parameters:
WebLogic Cluster List:
1.Host: '192.168.1.2' Port: 8001 Primary
General Server List:
1.Host: '192.168.1.2' Port: 8001
2.Host: '192.168.1.5' Port: 8001
3.Host: '192.168.1.7' Port: 8001
DefaultFileName: ''
PathTrim: '/weblogic'
PathPrepend: ''
ConnectTimeoutSecs: '10'
ConnectRetrySecs: '2'
HungServerRecoverSecs: '300'
MaxPostSize: '0'
StatPath: false
CookieName: JSESSIONID
Idempotent: ON
FileCaching: ON
ErrorPage: ''
DisableCookie2Server: OFF
Can someone please help to shed some light on this? I would be really grateful, thanks!
Jeff
Right - it means that the only configuration which can do perfect round-robin is a single plugin (non-Apache, or single-process Apache) - all others essentially do random (sort of, but it can skew test results during the first N requests).
Robert Patrick <[email protected]> wrote:
Dimitri,
The way Apache works is that it spawns a bunch of child processes, and the parent process that listens on the port delegates the processing of each request to one of the child processes. This means that the load balancing done by the plugin before the session ID is assigned does not do perfect round-robining, because there are multiple copies of the plugin loaded in the multiple child processes. This situation is similar to the one you would get by running multiple proxy servers on different machines with the NES/iPlanet and IIS plugins.
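This per-process effect is easy to simulate: give each simulated Apache child its own independent round-robin index and let the scheduler hand requests to children at random. Every child rotates strictly, but the merged request stream looks random. A toy sketch; the process and server counts are arbitrary:

```java
import java.util.*;

public class PerProcessRoundRobin {
    // Each simulated Apache child keeps its own rotation index,
    // exactly like one plugin copy loaded per child process.
    static List<Integer> simulate(int processes, int servers, int requests, long seed) {
        int[] nextIndex = new int[processes];
        Random scheduler = new Random(seed); // stands in for the OS picking a child
        List<Integer> chosenServers = new ArrayList<>();
        for (int r = 0; r < requests; r++) {
            int child = scheduler.nextInt(processes);
            chosenServers.add(nextIndex[child] % servers);
            nextIndex[child]++;
        }
        return chosenServers;
    }

    public static void main(String[] args) {
        // 8 children, 3 servers: the merged stream is not strict round-robin
        // even though every child individually rotates 0,1,2,0,1,2,...
        System.out.println(simulate(8, 3, 12, 42));
        // A single child degenerates to perfect round-robin:
        System.out.println(simulate(1, 3, 6, 42)); // [0, 1, 2, 0, 1, 2]
    }
}
```

The single-process case matches the point above: only a single plugin copy (non-Apache, or single-process Apache) can rotate perfectly.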
As I pointed out in my response to Jeff, attempting to address this problem with IPC mechanisms would only solve the single-machine problem, and most people deploy multiple proxy servers to avoid a single point of failure...
Hope this helps,
Robert
Dimitri Rakitine wrote:
Hrm. This is strange - I thought that all the information necessary for 'sticky' load balancing (primary/secondary) is contained in the cookie/session info, so the particular plug-in implementation should not make any difference. For load balancing - statistically, given a large enough sampling base, the Apache plug-in should perform just as well as the NS one (unless Apache is somehow misconfigured and calls fork() for each new request).
Jeff Calog <[email protected]> wrote:
Robert,
Thanks for the sanity reply, you are definitely right. I used Netscape 3.0 on Win2k and it did perfect round-robin load balancing to my servers.
<raving>
BEA - ARE YOU LISTENING? STOP TELLING PEOPLE YOUR APACHE PLUG-IN IS A VIABLE
LOAD BALANCING SOLUTION! It's worthless for load balancing!
</raving>
In some tests, as many as 90% of my connections/requests would be sent to a single server. There should be something in the release notes like "By the way, the Apache plug-in is only advertised as doing round-robin load balancing; in reality it doesn't work worth a darn".
I'm surprised they don't use shared memory or some other technique (pipes, sockets, signals, writing to /tmp, anything) for interprocess communication to fix that.
Jeff
Robert Patrick <[email protected]> wrote:
Yes, the problem lies in the fact that Apache uses multiple processes instead of multiple threads to process requests. Therefore, you end up with multiple processes, all with the WebLogic plugin loaded into them (and they cannot see one another)...
Hopefully, Apache 2.0, when it comes out, will allow the plugin to do a better job...
Dimitri--
Dimitri -
I am facing a peculiar situation in our multiple-forms application. The situation is like this:
In form A I have a text field named TEXT1 with data type number. I navigate from form A to form B and perform some update in form B. After this I come back to form A and type a number in the field TEXT1. The first digit that I type in this field almost always disappears. If I do not perform any update in form B, this does not happen and things work fine. I am messing up something that I am not aware of.
Does any body have any clue?
Thanks for the help.
I've experienced this peculiarity sporadically as well. From what I can tell, sometimes it does this: you navigate to the text item, you enter the first char, magically this char gets selected, the next char you write replaces the selection, and from there on everything seems normal.
I think it's a bug in the Java GUI part, and some of these things have gotten better with newer JInitiators. 1.3.1.8 on Unix was horrible, 1.3.1.9 (Windows) was better, and I don't see it happening (much?) with 1.3.1.13 (also on Windows).
Some people tend to experience it more often than others. I've hypothesized somewhat about mouse drivers and that sort of thing ...?
Regards,
Jesper Vad Kristensen -
Sharing of stories across lumira servers
Hi,
Is there any way we could consume stories created on one Lumira server in another Lumira server?
Example: create a story on one Lumira server, publish it to BOE, connect another Lumira server to BOE, and pull the story (.lums file) back to the second Lumira server.
Could this be possible?
Thanks
Sambit
Not as of yet. I kind of want it myself too, but for a different reason.
What I am interested to understand is how typical it is in business environments to have multiple servers, and in what situations you would try to publish from Lumira Server to BI. That way we can have meaningful discussions with engineering. If you could please submit the details of how such a capability can benefit your business to Idea Place, that would be really appreciated.
Thank you. -
PS CS5 : Grid erratic display with OpenGL
Hi, I am wondering if I am the only one having trouble with the grid display?
When OpenGL is active, the grid partially disappears if I zoom to 300% and disappears even more when zooming further.
I use PS for web design and really need my pixel grid. With CS3 I was using a 10px dotted grid with 10 subdivisions.
I have an iMac 24" and a brand new 13" and both have the exact same grid behavior.
Every power feature enabled by OpenGL works perfectly and I love it but then I don't have my grid.
I tweaked and tried different options but without much success. Any ideas how I could fix this while waiting for a patch?
PS: Why oh why did the PS team include a display pixel grid option if you can't have a snap to pixel option too?
If I'm reading you correctly, there are multiple graphics interfaces present in this system?
Hm, I wonder how, with multiple GPUs in the system, Adobe chooses the one to use... I think there's language somewhere that says Adobe just doesn't support multiple GPUs.
Does the situation persist if you boot up while docked?
-Noel -
Same table, Oracle 5 times slower than MySQL
Hi
I have several sites with the same application using a database as a log device and to later retrieve reports from. Some tables are for setup and one is for all the log data. The log data table has the following columns: LINEID, TAG, DATE_, HOUR_, VALUE, TIME_ and CHANGED. Typical data is: 122345, PA01_FT1_ACC, 2008-08-01, 10, 985642, "", 0.
Index (TAG,DATE_)
When calling a report, the software issues typically 3-5 SELECT queries like the following, only with a different TAG: SELECT * FROM table WHERE TAG='PA01_FT1_ACC' AND DATE_ BETWEEN '2008-08-01' AND '2008-08-31' AND HOUR_=24
Since our customers have different preferences, some sites have Oracle and some have MySQL. And I have registered that the sites running Oracle take 24-30 sec on the report, while MySQL takes 3-6 sec on a similar report with the same tables and querying software.
How is this?
Is there anything I can do to make Oracle work faster?
Should HOUR_ also be in the index?
Since I guess this slowness is not something inherent in Oracle, there must be something to do.
Thanks for any help.
Histograms on VARCHAR2 columns are based on the
first 6 bytes of the column. If the database is using
a character set that uses 1 byte per character, every
entry in the DATE_ column since the beginning of the
year looks like '2008-0' to the optimizer when
determining cardinality to produce the "best"
execution plan. For character sets that require
multiple bytes per character, the situation is worse
- every entry in the column representing this century
appears to be the same value to the optimizer when
determining cardinality
That's a very good point, and I didn't know before about the first 6 bytes being used. Can you point me to where in the docs it is listed, if it's there, or to some other document(s) with this detail?
Aman,
I am having a bit of trouble finding the information in the documentation about the number of bytes used by a histogram on a VARCHAR2 column.
References:
http://www.freelists.org/archives/oracle-l/08-2006/msg00199.html
"Cost-Based Oracle Fundamentals" page 117 shows a demonstration, and describes the use of ENDPOINT_ACTUAL_VALUE starting on Oracle 9i.
"Cost-Based Oracle Fundamentals" page 118-120 describes selectivity problems when histograms are not used and a date is placed into a VARCHAR2 column.
"Troubleshooting Oracle Performance", likely around page 130-140 also indicates that histograms only use the first 6 bytes.
See section "Followup November 12, 2005 - 4pm US/Eastern"
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563
An interesting test setup that almost shows what I intended - but Oracle 10.2.0.2 was a little smarter than I expected, even though it selected to use an index to retrieve more than 50% of a table... Take a look at the TO_CHAR representation of the ENDPOINT_VALUE from DBA_TAB_HISTOGRAMS to understand what I was trying to describe in my original post in this thread.
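Before the Oracle demo below, the prefix effect itself can be seen with plain string handling: every DATE_ value from the first nine months of 2008 shares the same 6-byte prefix, so a prefix-limited histogram endpoint cannot tell them apart. A quick illustration (this sketches the truncation idea only, not Oracle's actual endpoint encoding):

```java
import java.util.*;

public class HistogramPrefixSketch {
    // Truncate a value the way a prefix-limited histogram endpoint would
    // (assuming a 1-byte-per-character charset, so 6 bytes = 6 chars).
    static String endpointKey(String value) {
        return value.length() <= 6 ? value : value.substring(0, 6);
    }

    public static void main(String[] args) {
        List<String> dates = Arrays.asList(
            "2008-01-01", "2008-03-15", "2008-06-30", "2008-09-06");
        Set<String> distinctKeys = new TreeSet<>();
        for (String d : dates) distinctKeys.add(endpointKey(d));
        // Every date before October collapses to the single key "2008-0",
        // which is why the optimizer sees them as one value.
        System.out.println(distinctKeys); // [2008-0]
    }
}
```

With a multi-byte character set even fewer characters fit in the prefix, which is the "situation is worse" case mentioned earlier in the thread.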
CREATE TABLE T1 (DATE_ VARCHAR2(10));
INSERT INTO T1
SELECT
TO_CHAR(TO_DATE('2008-01-01','YYYY-MM-DD')+ROWNUM-1,'YYYY-MM-DD')
FROM
DUAL
CONNECT BY
LEVEL<=250;
250 rows created.
COMMIT;
CREATE INDEX IND_T1 ON T1(DATE_);
SELECT
MIN(DATE_),
MAX(DATE_)
FROM
T1;
MIN(DATE_) MAX(DATE_)
2008-01-01 2008-09-06
SELECT
COLUMN_NAME,
NUM_DISTINCT,
NUM_BUCKETS,
HISTOGRAM
FROM
DBA_TAB_COL_STATISTICS
WHERE
OWNER=USER
AND TABLE_NAME='T1';
no rows selected
SELECT
SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
ENDPOINT_NUMBER,
ENDPOINT_VALUE,
SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
FROM
DBA_TAB_HISTOGRAMS
WHERE
OWNER=USER
AND TABLE_NAME='T1';
no rows selected
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',METHOD_OPT=>'FOR COLUMNS SIZE 254 DATE_',CASCADE=>TRUE);
PL/SQL procedure successfully completed.
SELECT
COLUMN_NAME,
NUM_DISTINCT,
NUM_BUCKETS,
HISTOGRAM
FROM
DBA_TAB_COL_STATISTICS
WHERE
OWNER=USER
AND TABLE_NAME='T1';
COLUMN_NAME NUM_DISTINCT NUM_BUCKETS HISTOGRAM
DATE_ 250 250 HEIGHT BALANCED
SELECT
SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
ENDPOINT_NUMBER,
ENDPOINT_VALUE,
SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
FROM
DBA_TAB_HISTOGRAMS
WHERE
OWNER=USER
AND TABLE_NAME='T1'
ORDER BY
ENDPOINT_NUMBER;
COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_A
DATE_ 1 2.6059E+35 2008-01-01
DATE_ 2 2.6059E+35 2008-01-02
DATE_ 3 2.6059E+35 2008-01-03
DATE_ 4 2.6059E+35 2008-01-04
DATE_ 5 2.6059E+35 2008-01-05
DATE_ 6 2.6059E+35 2008-01-06
DATE_ 7 2.6059E+35 2008-01-07
DATE_ 8 2.6059E+35 2008-01-08
DATE_ 9 2.6059E+35 2008-01-09
DATE_ 10 2.6059E+35 2008-01-10
DATE_ 243 2.6059E+35 2008-08-30
DATE_ 244 2.6059E+35 2008-08-31
DATE_ 245 2.6059E+35 2008-09-01
DATE_ 246 2.6059E+35 2008-09-02
DATE_ 247 2.6059E+35 2008-09-03
DATE_ 248 2.6059E+35 2008-09-04
DATE_ 249 2.6059E+35 2008-09-05
DATE_ 250 2.6059E+35 2008-09-06
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
SELECT
DATE_
FROM
T1
WHERE
DATE_<='2008-01-15';
15 rows selected.
From the 10053 trace:
BASE STATISTICAL INFORMATION
Table Stats::
Table: T1 Alias: T1
#Rows: 250 #Blks: 5 AvgRowLen: 11.00
Index Stats::
Index: IND_T1 Col#: 1
LVLS: 0 #LB: 1 #DK: 250 LB/K: 1.00 DB/K: 1.00 CLUF: 1.00
SINGLE TABLE ACCESS PATH
Column (#1): DATE_(VARCHAR2)
AvgLen: 11.00 NDV: 250 Nulls: 0 Density: 0.002
Histogram: HtBal #Bkts: 250 UncompBkts: 250 EndPtVals: 250
Table: T1 Alias: T1
Card: Original: 250 Rounded: 15 Computed: 15.00 Non Adjusted: 15.00
Access Path: TableScan
Cost: 3.01 Resp: 3.01 Degree: 0
Cost_io: 3.00 Cost_cpu: 85607
Resp_io: 3.00 Resp_cpu: 85607
Access Path: index (index (FFS))
Index: IND_T1
resc_io: 2.00 resc_cpu: 49621
ix_sel: 0.0000e+000 ix_sel_with_filters: 1
Access Path: index (FFS)
Cost: 2.00 Resp: 2.00 Degree: 1
Cost_io: 2.00 Cost_cpu: 49621
Resp_io: 2.00 Resp_cpu: 49621
Access Path: index (IndexOnly)
Index: IND_T1
resc_io: 1.00 resc_cpu: 10121
ix_sel: 0.06 ix_sel_with_filters: 0.06
Cost: 1.00 Resp: 1.00 Degree: 1
Best:: AccessPath: IndexRange Index: IND_T1
Cost: 1.00 Degree: 1 Resp: 1.00 Card: 15.00 Bytes: 0
============
Plan Table
============
| Id | Operation | Name | Rows | Bytes | Cost | Time |
| 0 | SELECT STATEMENT | | | | 1 | |
| 1 | INDEX RANGE SCAN | IND_T1 | 15 | 165 | 1 | 00:00:01 |
Predicate Information:
1 - access("DATE_"<='2008-01-15')
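The Card: Rounded: 15 figure above can be checked by hand. With 250 buckets over 250 distinct values the histogram behaves like a frequency histogram, so the selectivity is simply matching values over total rows and the estimate equals the true count. A hedged sketch of that arithmetic (reconstructing the data in Python, not querying Oracle):

```python
from datetime import date, timedelta

NUM_ROWS = 250
dates = [(date(2008, 1, 1) + timedelta(days=n)).isoformat()
         for n in range(NUM_ROWS)]

# Rows actually satisfying the predicate, using string comparison just
# as Oracle does for the VARCHAR2 column.
matching = [d for d in dates if d <= '2008-01-15']

# One bucket per distinct value: selectivity = matching / total rows,
# so the cardinality estimate matches the 15 rows actually selected.
selectivity = len(matching) / NUM_ROWS
estimate = round(NUM_ROWS * selectivity)
print(len(matching), estimate)  # 15 15
```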
INSERT INTO T1
SELECT
TO_CHAR(TO_DATE('2008-09-07','YYYY-MM-DD')+ROWNUM-1,'YYYY-MM-DD')
FROM
DUAL
CONNECT BY
LEVEL<=250;
COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',METHOD_OPT=>'FOR COLUMNS SIZE 254 DATE_',CASCADE=>TRUE);
PL/SQL procedure successfully completed.
SELECT
COLUMN_NAME,
NUM_DISTINCT,
NUM_BUCKETS,
HISTOGRAM
FROM
DBA_TAB_COL_STATISTICS
WHERE
OWNER=USER
AND TABLE_NAME='T1';
COLUMN_NAME NUM_DISTINCT NUM_BUCKETS HISTOGRAM
DATE_ 500 254 HEIGHT BALANCED
SELECT
SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
ENDPOINT_NUMBER,
TO_CHAR(ENDPOINT_VALUE) ENDPOINT_VALUE,
SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
FROM
DBA_TAB_HISTOGRAMS
WHERE
OWNER=USER
AND TABLE_NAME='T1'
ORDER BY
ENDPOINT_NUMBER;
COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_A
DATE_ 0 260592218925307000000000000000000000 2008-01-01
DATE_ 1 260592218925307000000000000000000000 2008-01-02
DATE_ 2 260592218925307000000000000000000000 2008-01-04
DATE_ 3 260592218925307000000000000000000000 2008-01-06
DATE_ 4 260592218925307000000000000000000000 2008-01-08
DATE_ 5 260592218925307000000000000000000000 2008-01-10
DATE_ 6 260592218925307000000000000000000000 2008-01-12
DATE_ 7 260592218925307000000000000000000000 2008-01-14
DATE_ 8 260592218925307000000000000000000000 2008-01-16
DATE_ 9 260592218925307000000000000000000000 2008-01-18
DATE_ 10 260592218925307000000000000000000000 2008-01-20
...
DATE_ 242 260592219234792000000000000000000000 2009-04-26
DATE_ 243 260592219234792000000000000000000000 2009-04-28
DATE_ 244 260592219234792000000000000000000000 2009-04-29
DATE_ 245 260592219234792000000000000000000000 2009-05-01
DATE_ 246 260592219234792000000000000000000000 2009-05-02
DATE_ 247 260592219234792000000000000000000000 2009-05-04
DATE_ 248 260592219234792000000000000000000000 2009-05-05
DATE_ 249 260592219234792000000000000000000000 2009-05-07
DATE_ 250 260592219234792000000000000000000000 2009-05-08
DATE_ 251 260592219234792000000000000000000000 2009-05-10
DATE_ 252 260592219234792000000000000000000000 2009-05-11
DATE_ 253 260592219234792000000000000000000000 2009-05-13
DATE_ 254 260592219234792000000000000000000000 2009-05-14
SELECT
DATE_
FROM
T1
WHERE
DATE_ BETWEEN '2008-01-15' AND '2008-09-15';
245 rows selected.
From the 10053 trace:
BASE STATISTICAL INFORMATION
Table Stats::
Table: T1 Alias: T1
#Rows: 500 #Blks: 5 AvgRowLen: 11.00
Index Stats::
Index: IND_T1 Col#: 1
LVLS: 1 #LB: 2 #DK: 500 LB/K: 1.00 DB/K: 1.00 CLUF: 2.00
SINGLE TABLE ACCESS PATH
Column (#1): DATE_(VARCHAR2)
AvgLen: 11.00 NDV: 500 Nulls: 0 Density: 0.002
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 255
Table: T1 Alias: T1
Card: Original: 500 Rounded: 240 Computed: 240.16 Non Adjusted: 240.16
Access Path: TableScan
Cost: 3.01 Resp: 3.01 Degree: 0
Cost_io: 3.00 Cost_cpu: 148353
Resp_io: 3.00 Resp_cpu: 148353
Access Path: index (index (FFS))
Index: IND_T1
resc_io: 2.00 resc_cpu: 111989
ix_sel: 0.0000e+000 ix_sel_with_filters: 1
Access Path: index (FFS)
Cost: 2.01 Resp: 2.01 Degree: 1
Cost_io: 2.00 Cost_cpu: 111989
Resp_io: 2.00 Resp_cpu: 111989
Access Path: index (IndexOnly)
Index: IND_T1
resc_io: 2.00 resc_cpu: 62443
ix_sel: 0.48031 ix_sel_with_filters: 0.48031
Cost: 2.00 Resp: 2.00 Degree: 1
Best:: AccessPath: IndexRange Index: IND_T1
Cost: 2.00 Degree: 1 Resp: 2.00 Card: 240.16 Bytes: 0
============
Plan Table
============
| Id | Operation | Name | Rows | Bytes | Cost | Time |
| 0 | SELECT STATEMENT | | | | 2 | |
| 1 | INDEX RANGE SCAN | IND_T1 | 240 | 2640 | 2 | 00:00:01 |
Predicate Information:
1 - access("DATE_">='2008-01-15' AND "DATE_"<='2008-09-15')

I am sure there are much better examples than the above; it generates a very small data set and is still an incomplete test setup.
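The second estimate can also be sanity-checked. The trace reports ix_sel: 0.48031 from the 254-bucket height-balanced histogram, and the cardinality is just num_rows times that selectivity; the true count for the BETWEEN range is close but not identical, because the height-balanced buckets only approximate the distribution. A hedged Python reconstruction (the 0.48031 is taken from the trace above, not re-derived here):

```python
from datetime import date, timedelta

NUM_ROWS = 500
dates = [(date(2008, 1, 1) + timedelta(days=n)).isoformat()
         for n in range(NUM_ROWS)]

# True row count for the BETWEEN predicate; lexical comparison of the
# ISO strings matches date order.
actual = sum('2008-01-15' <= d <= '2008-09-15' for d in dates)

# The 10053 trace reports ix_sel = 0.48031 for the height-balanced
# histogram; cardinality estimate = num_rows * selectivity = ~240.16,
# versus the 245 rows the query actually returned.
estimate = NUM_ROWS * 0.48031
print(actual, estimate)
```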
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.