Preventing large-scale data hacks
Thinking about the Anthem data breach in the database context: among all the workload groups, resource groups and query governor settings, there's no setting at the database, account or connection level to throttle the total rows returned, is there?
I realize an administrator-level password can always reset things, audits can watch a lot, and someone can always pull a disk drive and run away with it, but right now even user-level, read-only logins can dump entire 80-million-row tables and joins.
Would it make sense to parameterize the permitted output row counts, either as a hard limit on result sets or as a soft limit that adds increasing time delays for larger counts? And I mean at the database level; most apps are already built with something
like these governors built in.
Thanks,
Josh
But it's a basic principle of security that any measure can be beaten; what you do is make the attack more costly. A single query and download that can be done in five minutes is different from having to sit there for hours, and those hours give
monitors time to detect the activity and alert operations.
If there is one thing hackers have plenty of, it's time.
The idea could possibly be useful if you want to avoid excessive resource consumption from a stupid user. But for stealing data worth billions of dollars? I think most people can accept a little overtime to make that fortune.
Erland Sommarskog, SQL Server MVP, [email protected]
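For illustration of the app-level governors Josh mentions: at the application or connection layer, a result-set cap is easy to enforce with plain JDBC. The sketch below is only an example; the table, the connection string and the 10,000-row ceiling are invented for the illustration, and it is no substitute for the database-level control being asked about, since anyone connecting with their own client bypasses it.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RowCapExample {
    // Arbitrary ceiling, chosen only for the example; a real deployment would make it configurable.
    private static final int MAX_ROWS = 10_000;

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://dbhost;databaseName=demo", "readonly_user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT member_id, last_name FROM customers WHERE last_name LIKE ?")) {
            ps.setMaxRows(MAX_ROWS);          // the driver stops returning rows past the cap
            ps.setString(1, "S%");
            int count = 0;
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    count++;
                    // ... process the row ...
                }
            }
            if (count == MAX_ROWS) {
                // The cap was hit; worth logging so monitoring can flag unusually large pulls.
                System.err.println("Row cap reached; result truncated at " + MAX_ROWS + " rows");
            }
        }
    }
}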
Similar Messages
-
Errors and exceptions in writing large binary data on sockets!!! urgent
hi
I am trying to write large binary data in the form of byte arrays on sockets.
The data can be as large as 512KB (524,288 bytes). I store the data in a byte array (actually read from a file through a FileInputStream) and then write it to the socket with lines like this:
DataOutputStream dos =
new DataOutputStream(new BufferedOutputStream(sock.getOutputStream()));
dos.write(b);
/* b is the byte array in which the data is stored; sometimes I write with the (b, offset, len) variant */
dos.flush();
dos.close();
sock.close();
But the program is not stable: sometimes the whole 512KB is read on the other side, and sometimes less, usually around 64KB.
The program is unthreaded.
There is another problem: one side (reading or writing) sometimes gives this error:
java.net.SocketException: Software caused connection abort: socket write error
Please reply soon and give your suggestions.
Thanks.
Umm, how are you reading the data on the other side?
Posting some of your reading code might help. Your writing
code seems OK. I've written a file transfer program
in a similar fashion and have successfully tested it on
different platforms (AIX, AS/400, Solaris, Windows,
etc.) without any problems, and without needing to set
the buffer sizes, with files as large as 600MB. And you
said you're testing this on the loopback?
The point here is that you should never need to change any of the default TCP options to get program correctness. The options are more for optimization and fine tuning. If you do need to change the options to get your program to work, then your program won't be able to scale under different loads.
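For what it's worth, the usual culprit on the receiving side is a single read(buf) call that is assumed to fill the whole buffer; TCP is free to deliver the 512KB in smaller chunks, which would explain the 64KB you sometimes see. Here is a minimal sketch of a reader, assuming (as in the writer code above) that the sender closes the socket when it is done:

import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

public class BlobReader {
    // Reads everything the peer sends until it closes the connection.
    static byte[] readAll(Socket sock) throws IOException {
        DataInputStream in = new DataInputStream(sock.getInputStream());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {   // a single read may return far less than the full payload
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}

If both sides need to keep the connection open instead, send the length first with writeInt() and use readFully() on the receiving end.

-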
About shell scripts for large-scale automation of encoding tasks
In the user manual for Compressor, it says that we can use the command line to write shell scripts for large-scale automation of encoding tasks.
I would like more information about shell scripting for Compressor; is there any documentation link?
Thanks.

You can use a script function to set up a more secure environment that you call at the start of every admin script. This could be your main stamp album for stuff that can be moved there.
A few more stamps to add to the collection (be sure to read up on them before use):
1) reset the command hash
hash -r
2) prevent core dumps
ulimit -H -c0
3) set the IFS to a known-safe value (e.g. IFS=$' \t\n')
4) clear all aliases (see unalias -a)
Also, you can remove the ALL from sudo and add explicit commands to the sudoers file. There's a lot of fine tuning you can do in sudoers, including env variables as teekay said.
But I'm no expert so best to check all of the above. -
Very-large-scale searching in J2EE
I'm looking to solve a very-large-scale searching problem. I am creating a site
where users can search a table with five million records, filtering and sorting
independently on ten different columns. For example, the table might be five million
customers, and the user might choose "S*" for the last name, and sort ascending
on street name.
I have read up on a number of patterns to solve this problem, but anticipate some
performance issues. I'll explain below:
1) "Page-by-Page Iterator" or "Value List Handler"
In this pattern, it appears that all records that match the search criteria are
retrieved from the database and cached on the application server. The client (JSP)
can then access small pieces of the cached results at a time. Issues with this
include:
- If the customer record is 1KB, then wide search criteria (i.e. last name =
S*) will cause 1 GB transfer from the database server to app server, and then
1GB being stored on the app server, cached, waiting for the user (each user!)
to ask for the next 10 or 100 records. This is inefficient use of network and
memory resources.
- 99% of the data transferred from the database server will not be used ... most
users flip through a couple of pages and then choose a record or start a new search
2) Requery the database each time and ask for a subset
I haven't seen this formalized into a pattern yet, but the basic idea is this:
If a client asks for records 1-100 first (i.e. page 1), only fetch that many
records from the db. If the user asks for the next page, requery the database
and use the JDBC API's ResultSet.absolute(int row) to start at record 101. Issue:
The query is re-performed, causing the Oracle server to do another costly "execute"
(bad on 5M records with sorting).
To solve this, I've been trying to enhance the second strategy above by caching
the ResultSet object in a stateful session bean. Unfortunately, this causes a
"ResultSet already closed" SQLException, although I ensure that the Connection,
PreparedStatement, and ResultSet are all stored in the EJB and not closed. I've
seen this on newsgroups ... it appears that WebLogic is forcing the Connection
closed. If this is how J2EE and pooled connections work, then that's fine ...
there's nothing I can really do about it.
Another idea is to use "explicit cursors" in Oracle. I haven't fully explored
it yet, but it wouldn't be a great solution as it would be using Oracle-specific
functionality (we are trying to be db-agnostic).
More information:
- BEA WebLogic Server 8.1
- JDBC: Oracle's thin driver provided with WLS 8.1
- Platform: Sun Solaris 5.8
- Oracle 9i
Any other ideas on how I can solve this issue?

Hi. Fancy SQL to the rescue! If the table has a unique key, you can simply send a
query per page, with iterative SQL that selects the next N rows beyond what was
selected last time. Eg:
Let variable X be the highest key value you've seen so far. Initially it would
be the lowest possible value.
select * from mytable M
where ... -- application-specific qualifications...
and M.key > X
and 100 > (select count(*) from mytable MM where MM.key > X and MM.key < M.key and ...)
In English, this says, select all the qualifying rows higher than what I last saw, but
only those that have fewer than 100 qualifying rows between the last I saw and them (ie:
the next 100).
When processing this query, remember the highest key value you see, and use it for the
next query.
Joe
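As a purely illustrative companion to the SQL above, here is a minimal JDBC sketch of keyset-style paging. The table and column names (customers, cust_id, last_name, street) are hypothetical, and it uses ORDER BY on the key plus Statement.setMaxRows in place of the correlated count subquery, which keeps it portable across drivers:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class KeysetPager {
    private static final int PAGE_SIZE = 100;

    // Fetches the page of rows whose key follows lastSeenKey and returns the
    // highest key seen, which the caller passes back in for the next page.
    static long fetchNextPage(Connection conn, long lastSeenKey) throws SQLException {
        String sql = "SELECT cust_id, last_name, street FROM customers "
                   + "WHERE last_name LIKE ? AND cust_id > ? ORDER BY cust_id";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setMaxRows(PAGE_SIZE);          // only one page comes back to the app server
            ps.setString(1, "S%");
            ps.setLong(2, lastSeenKey);
            long highestKey = lastSeenKey;
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    highestKey = rs.getLong("cust_id");
                    // ... render the row ...
                }
            }
            return highestKey;                 // keep this between requests, e.g. in the session
        }
    }
}

Because the WHERE clause itself skips past the rows already shown, the server never has to re-scan or buffer the earlier pages the way ResultSet.absolute() forces it to.

-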
I am wondering if anyone can point me to a reliable source of information that shows how large-scale software companies handle software design specifications.
Do they do a full design? Partial? No designs at all?
Do the companies do a high and low level design, or split it into different phases/iterations?
Do they use a proprietary format (Text + UML)? Straight UML? Text Only?
Does anyone know of a source of information that describes these sorts of things?
Thanks.

Most will have a multitude of "standards" and "processes" in use in different departments and for
different projects.
I agree with you, but is there information out there
which points to what large companies tend to do during
the design phase? On large scale projects (10,000+
function points), it would be nearly impossible to
approach this task without dividing and conquering
(refer to Dr. Jenkings Extreme Software Cost
Estimation, which says that an individual task of
200,000+ lines of code can never be completed, or at
least has never been done).
Large scale projects exist without formal design. They usually arrive at that over time by incremental addition.
So there must be design approaches in use, otherwise
these companies would never finish their
projects. So I am trying to find a source that I can
cite with information on specifically what design
processes and modeling are being used.

Just because there are no formal designs, or the formal designs are not up to date, does not mean there are no designs.
The problem is not that designs do not exist, but rather that there is no way to communicate those designs to others. Formal, up-to-date designs solve that problem. -
Typical/Common large-scale ACE deployment or designs?
I am deploying several ACE devices and GSS devices to facilitate redundancy and site load balancing at a couple of data centers. Now that I have a bunch of experience with the ACE and GSS, are there typical or common ACE deployment methods? Are there reference designs? I have been looking, and haven't really found any.
Even if they are not Cisco 'official' methods, I'm wondering how most people, particularly those who deploy a lot of these or deploy them with large-scale systems, typically do it.
I'm using routed mode (not one-arm mode), and I'm wondering if most people use real servers (in my case, web servers) with dual NICs to support connectivity to back-end systems. Or do people commonly just route it all through the ACE?
Also, how many VIPs and real servers have been deployed in a single ACE 4710 device? I'm trying to deploy about 700 VIPs with about 1800 Real Servers providing content to those VIPs.
How do people configure VIPs, farms, and sticky? I'm looking for how someone who wants to put a large number of VIPs and real servers into the ACE would succeed at doing it. I have attempted to add a large number in the 'global' policy-map, but that uses too many PANs.
I have tried a few methods myself, and have run into the limit on Policy Action Nodes (PANs) in the ACE device. Has anyone else hit this issue? Any tips or tricks on how to use PANs more conservatively?
Any insight you can share would be appreciated.
- Erik

As far as I can see from your requirements, I suggest you create 1 EAR file for your portal and 1 EAR file per module.
The EAR file for your portal is the main application, and the EAR files of your modules are shared libraries that contain the taskflows. These taskflows can be consumed in the portal application.
This way, you can easily deploy one module without needing to redeploy the main application or the other modules.
It also lets you divide your team of developers so everybody can work on a separate module without interfering.
On a side note: once you have deployed your main application and later create a new module, you have to register that module with your application, so you will need to redeploy your portal; but if you update an existing module, you won't need to redeploy your portal.
As for the security, all your modules will inherit the security model of your portal application. -
Large scale forte implementation
Dear Forte experts:
I am part of a team in a large insurance company in charge of developing an
enterprise-wide insurance solution. We have been approached by a few vendors, one
of which bases its architecture on Forte. We really like what we saw; however,
neither the vendor nor we are confident or knowledgeable enough about the
performance behaviour of Forte in a distributed computing environment that can
be characterized as multiple islands of processing, with
millions of transactions per day, spread across a wide area network.
I am afraid we won't be able to choose the Forte route unless we gain confidence in
its performance capability in our typical environment. So any insight, examples, or
case studies that I can get from this group's collective knowledge are extremely
helpful and greatly appreciated.
Sincerely,
Farhad Abar, Ph.D.

From: Inman, Kal
Sent: Thursday, June 12, 1997 7:07 AM
To: [email protected]
Subject: RE: large scale forte implementation
Farhad
At Andersen Windows, we have been running our Order Entry system over
a 56K frame relay network since the system's initial deployment in November
of 1994. We currently have a user base of approximately 120 PC & Mac
clients running over the frame, with an additional internal installed
base of approximately 50 PC & Mac workstations. This system runs on a
single Sequent server. We soon hope to add NT clients to this mix.
It has been our observation that Forte has not been a constraint to
performance. When we have performance problems, they have generally been
caused by poor design. One of our largest constraints on performance
is the amount of data we drag across the network.
Since the successful implementation of our Order Entry system,
Andersen has adopted Forte as our enterprise custom development tool.
It has allowed our development staff to concentrate on the development of
business functionality while insulating us from the complexities of
operating systems, messaging, and maintaining platform-specific code.
We currently have several additional systems deployed using Forte.
These systems include three Express applications, a standalone Windows
application and a mobile client application.
I think Thomas Mercer Hursh asked a valid question "What are the
alternatives you are considering?". I don't think you will find one
to compare with Forte.
Kal Inman
Andersen Windows
-
How to create a single large bitmap data at run time?
Hi All,
Please help me in overcoming the issue that is mentioned below.
Requirement: create a single very large bitmap (BitmapData) containing some 30 loaded PNG images, each with some text. Images and text are loaded dynamically (AS2 code; the images are stored on a remote server). The display should show 8 images at a time with their corresponding text, and the rest of the content can be seen by scrolling (kinetic scrolling is implemented). How can I go about this?
Some questions:
· Is there any limit on the size of bitmap data? As per the link below, there is a restriction on the height of the bitmap we can create in AS2 (the max value is 2880, which is not enough for a 30-element list): http://help.adobe.com/en_US/FlashLite/2.0_FlashLiteAPIReference2/WS84235ED5-9394-4a52-A098-EED216C18A66.html How can we overcome this limitation?
· If we create individual bitmap data for the 30 individual PNG files, we see some jerkiness while scrolling. How can we get smooth scrolling?
Thanks and Regards,
Manjunath

Thank you very much for the reply.
The number of bitmaps is not always 30. It can vary at runtime, and since all the PNG files are stored on a remote server and also vary during runtime, we cannot predefine the number of child movie clips. It could be 10 at one time and maybe 20 at another.
Any help? -
Hi, my printing in Adobe has suddenly changed to a large scale: what should be one page of print comes out as 24 pages. I haven't changed anything, and it's happening on more than one document; I have to stop my printer before all the pages spew out. I have tried printing a single page and it does exactly the same. Help?
Is the Poster Print feature turned ON?
-
Large OLTP data set to get through the cache in our new ZS3-2 storage.
We recently purchased a ZS3-2 and are currently attempting to do performance testing. We are using various tools to simulate load within our Oracle VM 3.3.1 cluster of five Dell M620 servers: swingbench, vdbench, and dd. The OVM repositories are connected via NFS. The Swingbench load-testing servers have a base OS disk mounted from the repos and NFS mounts via NFSv4 from within the VM (we would also like to test dNFS later in our testing).
The problem I'm trying to get around is that the 256G of DRAM (a portion of which is used for the ARC) is large enough that my reads are not touching the 7200 RPM disks. I'd like to create a data set large enough that the random reads cannot possibly be served from the ARC cache (NOTE: we have no L2ARC at the moment).
I've run something similar to this in the past, but have adjusted the "sizes=" parameter to be larger than 50m. My thought here is that, if the ARC is up towards around 200 or so MBs, and I create the following on four separate VMs and run vdbench at about the same time, it will be attempting to read more data than can possibly fit in the cache.
* 100% random, 70% read file I/O test.
hd=default
fsd=default,files=16,depth=2,width=3,sizes=(500m,30,1g,70)
fsd=fsd1,anchor=/vm1_nfs
fwd=fwd1,fsd=fsd*,fileio=random,xfersizes=4k,rdpct=70,threads=8
fwd=fwd2,fsd=fsd*,fileio=random,xfersizes=8k,rdpct=70,threads=8
fwd=fwd3,fsd=fsd*,fileio=random,xfersizes=16k,rdpct=70,threads=8
fwd=fwd4,fsd=fsd*,fileio=random,xfersizes=32k,rdpct=70,threads=8
fwd=fwd5,fsd=fsd*,fileio=random,xfersizes=64k,rdpct=70,threads=8
fwd=fwd6,fsd=fsd*,fileio=random,xfersizes=128k,rdpct=70,threads=8
fwd=fwd7,fsd=fsd*,fileio=random,xfersizes=256k,rdpct=70,threads=8
rd=rd1,fwd=fwd1,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
rd=rd2,fwd=fwd2,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
rd=rd3,fwd=fwd3,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
rd=rd4,fwd=fwd4,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
rd=rd5,fwd=fwd5,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
rd=rd6,fwd=fwd6,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
rd=rd7,fwd=fwd7,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
However, the problem I keep running into is that vdbench's java processes will throw exceptions
... <cut most of these stats. But suffice it to say that there were 4k, 8k, and 16k runs that happened before this...>
14:11:43.125 29 4915.3 1.58 10.4 10.0 69.9 3435.9 2.24 1479.4 0.07 53.69 23.12 76.80 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 7.36 0.1 627.2 0.0 0.00 0.0 0.00 0.0 0.00
14:12:13.071 30 4117.8 1.88 10.0 9.66 69.8 2875.1 2.65 1242.7 0.11 44.92 19.42 64.34 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 12.96 0.1 989.1 0.0 0.00 0.0 0.00 0.0 0.00
14:12:13.075 avg_2-30 5197.6 1.52 9.3 9.03 70.0 3637.8 2.14 1559.8 0.07 56.84 24.37 81.21 16383 0.0 0.00 0.0 0.00 0.0 0.00 0.1 6.76 0.1 731.4 0.0 0.00 0.0 0.00 0.0 0.00
14:12:15.388
14:12:15.388 Miscellaneous statistics:
14:12:15.388 (These statistics do not include activity between the last reported interval and shutdown.)
14:12:15.388 WRITE_OPENS Files opened for write activity: 89 0/sec
14:12:15.388 FILE_CLOSES Close requests: 81 0/sec
14:12:15.388
14:12:16.116 Vdbench execution completed successfully. Output directory: /oracle/zfs_tests/vdbench/output
java.lang.RuntimeException: Requested parameter file does not exist: param_file
at Vdb.common.failure(common.java:306)
at Vdb.Vdb_scan.parm_error(Vdb_scan.java:50)
at Vdb.Vdb_scan.Vdb_scan_read(Vdb_scan.java:67)
at Vdb.Vdbmain.main(Vdbmain.java:550)
So I know from reading other posts that vdbench will do what you tell it (Henk brought that up). But based on this, I can't tell what I should do differently in the vdbench file to get around this error. Does anyone have advice for me?
Thanks,
Joe

Ah... it's almost always the second set of eyes. Yes, it is run from a script. And I just looked and realized that the last line didn't have the # in it. Here's the line:
"Proceed to the "Test Setup" section, but do something like `while true; do ./vdbench -f param_file; done` so the tests just keep repeating."
I just added the hash to comment that line out and am rerunning my script. My guess is that it'll complete. Thanks, Henk. -
In OSB, XQuery issue with large-volume data
Hi,
I am facing a problem with an XQuery transformation in OSB.
There is one XQuery transformation where I compare all the records, and if there are similar records I club them under the same first node.
Here I am reading the input file from the FTP process. This works perfectly for small input data. With large input data it also works, but it takes a huge amount of time, the file moves to the error directory, and I see duplicate records created for the same input data. I am not seeing anything related to this file in the error log or the normal log.
How can I check what exactly is causing the issue here, why the file is moving to the error directory, and why I am getting duplicate data for large input (approx. 1GB)?
My Xquery is something like below.
<InputParameters>
for $choice in $inputParameters1/choice
let $withSamePrimaryID := ($inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID])
let $withSamePrimaryID8 := ($inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME])
return
<choice>
if(data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID)) then
let $claimID:= $withSamePrimaryID[1]/ClaimID
return
<ClaimID>{$claimID}</ClaimID>
else
<ClaimID>{ data($choice/ClaimID) }</ClaimID>

Hi,
I understand your use case is:
a) read the file (from an FTP location; a txt file, hopefully)
b) process the file (your XQuery; I won't get into the details)
c) do something with the file (send it to a backend system via a Business Service?)
Also, I noted that large files take a long time to process. This depends on the memory/heap assigned to your JVM, so that much is expected behaviour.
On the other point of the file being moved to the error directory: this could be the error handler doing its job (if you have one). If there are no error handlers, look at the timeout and error-condition scenarios on your service.
HTH
When printing from an online PDF, the page prints at an extra large scale. How do I fix this?
This can happen when Firefox has misread the paper size from the information supplied by Windows. Clearing it can involve finding some obscure settings, but here goes:
(1) In a new tab, type or paste '''about:config''' in the address bar and press Enter. Click the button promising to be careful.
(2) In the search box above the list, type or paste '''print''' and pause while the list is filtered
(3) For each setting for the problem printer, right-click and Reset it. The fastest way is to right-click with the mouse and then press the r key on the keyboard with your other hand.
Note: In a couple of other threads involving Brother printers, the preference '''printer_printer_name.print_paper_data''' was set to 256, and when the user edited it to 1 that fixed the paper size problem. If you see a 256 there, you can edit the value by double-clicking it or using right-click>Modify.
Line appears when applying drop shadow on large scale
Hello!
Some weeks ago I had to make a large-scale graphic (800mm x 2000mm) for a roll-up banner. I wanted to apply a drop shadow to a rounded shape, and ugly lines appeared. Since it was a bit urgent, I decided not to use it.
But now I'm curious, so I quickly made an ellipse and added a shadow, so you know what I mean. This also happens when I save it as pdf or image.
Perhaps someday I will have to use a drop shadow at a large scale. So, does anybody know how to fix this, or what could I do if I need to use this effect under these conditions? I use Illustrator CS6 on a Mac with Mavericks.
Thanks in advance.

Mike Gondek wrote:
I was able to create an ellipse to your dimension and got a good drop shadow. What happens if you manually make a drop shadow using appearance?
FYI I tried making the same using drop shadow filter in CS5 and got this error.
In case your file was created in CS5 and opened in CS6, I would recreate the drop shadow in CS6. I know they redid the Gaussian blur in CS6, but I am not sure if that affected drop shadows.
CS6 is better on raster effects at large sizes.
My file was created and opened in CS6.
My ellipse is around 175cm x 50cm. I tried it manually like you said and got the same results:
So, I guess I'm alone with such a problem. No idea what is wrong :/ -
SELECT records larger than date specified in sub query
Dear All
Thank you for your attention.
I would like to select records with a date larger than the date specified in a sub query.
The query should be something like the following
SELECT my_order_number, my_date, my_task
FROM MYTB
WHERE my_order_number IN order_no AND my_date > date (SELECT order_no, date FROM MySubQueryResult)
(it is incorrect)
Sub query result:
order_no | date
A1 | 2014-12-21 09:06:00
A2 | 2014-12-20 09:07:00
A3 | 2014-12-20 08:53:00
A4 | 2014-12-20 08:57:00
MYTB:
my_order_number | my_task | my_date
A1 | T1 | 2014-12-21 09:06:00
A1 | T2 | 2014-12-22 10:01:00
A2 | T1 | 2014-12-20 09:07:00
A3 | T2 | 2014-12-20 08:53:00
A3 | T4 | 2014-12-21 09:30:00
A3 | T8 | 2014-12-23 20:32:00
A4 | T6 | 2014-12-20 08:57:00
expected result:
my_order_number | my_task | my_date
A1 | T2 | 2014-12-22 10:01:00
A3 | T4 | 2014-12-21 09:30:00
A3 | T8 | 2014-12-23 20:32:00
Any ideas? Thanks.
swivan

Hi,
try this
SELECT my_order_number, my_date, my_task
FROM MYTB
WHERE my_order_number IN (SELECT order_no FROM MySubQueryResult)
AND my_date > (SELECT date FROM MySubQueryResult)
Alternatively, you can also make use of joins to achieve the same.
Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
Praveen Dsa | MCITP - Database Administrator 2008 |
My Blog | My Page
Dear Praveen Dsa
Thanks for your reply, but order_no and date are paired and related and cannot be separated.
Each order has its own date, so it is not working.
Best Regards
swivan -
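For the date comparison discussed in this thread: since order_no and date are paired, one way (a sketch only, not tested against the real schema) is to join MYTB to MySubQueryResult so each order is compared against its own date. A JDBC version, with the column named date left unquoted as in the thread (some databases may require it to be quoted):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LaterTasksQuery {
    // Prints the MYTB rows whose my_date is strictly later than the paired order's date.
    static void printLaterTasks(Connection conn) throws SQLException {
        String sql = "SELECT t.my_order_number, t.my_task, t.my_date "
                   + "FROM MYTB t "
                   + "JOIN MySubQueryResult r ON r.order_no = t.my_order_number "
                   + "WHERE t.my_date > r.date";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%s | %s | %s%n",
                        rs.getString("my_order_number"),
                        rs.getString("my_task"),
                        rs.getTimestamp("my_date"));
            }
        }
    }
}

On the sample data above this returns exactly the three expected rows (A1/T2, A3/T4, A3/T8), because each order is compared only against its own date.

-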
Tweaking product prices on a large scale - how?
My client has a software store on BC. His supplier is constantly changing their prices, and my client wants to be able to quickly review prices and make changes to reflect supplier prices every few days.
If I export the Product List, the Excel export is unusable because it is full of HTML markup from the product descriptions.
Apart from opening each product individually to check and tweak prices, how is everyone amending prices on a large scale? My client only has 60 products at the moment, but this will soon quadruple, and I have prospective clients looking at BC for their ecommerce solution who have thousands of items.
Regards
Richard

If it's just prices you want to update, see if you can eliminate all the other columns that are not needed and import only the price column (with its product identifier, of course), and see if it will just update the price without having to deal with the descriptions... Just a thought...