Help with SQL Round
I need to round the Percentage column in the query below, but I couldn't quite get the ROUND function right. Right now I am getting:
28.888888800
would like
28.89%
SELECT
    a.doctor AS Doctor
  , a.financialclass AS FinancialClass
  , COUNT(a.PatientVisitId) / CAST(sub.the_count AS DECIMAL(8, 2)) * 100 AS Percentage
FROM #temp a
JOIN (
    SELECT
        doctor
      , COUNT(patientvisitid) AS the_count
    FROM #temp
    GROUP BY doctor
) sub ON a.doctor = sub.doctor
GROUP BY
    a.doctor
  , a.FinancialClass
  , sub.the_count
ORDER BY
    Doctor, FinancialClass;
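A sketch of one possible fix (untested; it assumes SQL Server, since the query uses #temp tables): T-SQL's ROUND keeps the original scale, so ROUND alone would still show trailing zeros; casting the rounded value to DECIMAL(8,2) trims the display, and the trailing % has to be concatenated as text:

```sql
SELECT
    a.doctor AS Doctor
  , a.financialclass AS FinancialClass
    -- numeric, two decimals: 28.89
  , CAST(ROUND(COUNT(a.PatientVisitId) / CAST(sub.the_count AS DECIMAL(8, 2)) * 100, 2) AS DECIMAL(8, 2)) AS Percentage
    -- text with the percent sign: 28.89% (CONCAT needs SQL Server 2012+;
    -- older versions can concatenate with + after a CAST to varchar)
  , CONCAT(CAST(ROUND(COUNT(a.PatientVisitId) / CAST(sub.the_count AS DECIMAL(8, 2)) * 100, 2) AS DECIMAL(8, 2)), '%') AS PercentageDisplay
FROM #temp a
JOIN (SELECT doctor, COUNT(patientvisitid) AS the_count FROM #temp GROUP BY doctor) sub
  ON a.doctor = sub.doctor
GROUP BY a.doctor, a.FinancialClass, sub.the_count
ORDER BY Doctor, FinancialClass;
```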
Hi Vishkey,
If you mean that you prefer to do it in the application that gets the data from SQL rather than at the query level, then I think it is not black and white. There are pros and cons, for example:
* If there is a lot of data to transfer from SQL to the app, I will try to minimize it, and sending the value 22.22 is much less than sending 22.22222222, for example.
* If we need to do some math on the value, then I prefer to pass the whole number to stay more accurate, since we can always convert 22.22222222 into 22.22 but not vice versa.
* In general I agree that if this is a display problem, then in most cases I would do it at the app level and not in the SQL query.
Since we are in the T-SQL forum I will go with Naomi's solution this time :-)
In the ASP.NET forum I would probably go with your solution :-)
Similar Messages
-
Need help with SQL Query with Inline View + Group by
Hello Gurus,
I would really appreciate your time and effort regarding this query. I have the following data set.
Reference_No  Check_Number  Check_Date  Description                         Invoice_Number  Invoice_Type  Paid_Amount  Vendor_Number
1234567       11223         7/5/2008    paid for cleaning                   44345563        I             *20.00*      19
1234567       11223         7/5/2008    Adjustment for bad quality          44345563        A             10.00        19
7654321       11223         7/5/2008    Adjustment from last billing cycle  23543556        A             50.00        19
4653456       11223         7/5/2008    paid for cleaning                   35654765        I             30.00        19
I am trying to write a query that aggregates paid_amount by Reference_No, Check_Number, Check_Date, Invoice_Number, Invoice_Type and Vendor_Number, and that displays the Description of the Invoice_Type 'I' record when there are multiple records sharing those key columns. When there are no multiple records, I want to display the record's own Description.
The query should return the following data set
Reference_No  Check_Number  Check_Date  Description                         Invoice_Number  Invoice_Type  Paid_Amount  Vendor_Number
1234567       11223         7/5/2008    paid for cleaning                   44345563        I             *10.00*      19
7654321       11223         7/5/2008    Adjustment from last billing cycle  23543556        A             50.00        19
4653456       11223         7/5/2008    paid for cleaning                   35654765        I             30.00        19
The following is my query. I am kind of lost.
select B.Description, A.sequence_id,A.check_date, A.check_number, A.invoice_number, A.amount, A.vendor_number
from (
select sequence_id,check_date, check_number, invoice_number, sum(paid_amount) amount, vendor_number
from INVOICE
group by sequence_id,check_date, check_number, invoice_number, vendor_number
) A, INVOICE B
where A.sequence_id = B.sequence_id
Thanks,
Nick
It looks like it is a duplicate thread - correct me if I'm wrong in this case ->
Need help with SQL Query with Inline View + Group by
Regards.
Satyaki De. -
Hi everyone, can someone please help me with a function to use here? I have a nine-digit number, e.g. (000046650 and 024063870). I would like to convert this to 4 digits before the comma and 5 digits after the comma, so it must be something like this (0000,46650 and 0240,63870). I thought I could easily do this with a ROUND function, but it doesn't work. Could somebody please tell me what I'm missing?
Thank you all for your help.
Hi,
I must be missing something, but why don't you just divide by 10000?
Scott@my11g SQL>with d as (
2 select 000046650 n from dual
3 union all
4 select 024063870 from dual
5 )
6 select n/10000 from d ;
N/10000
4.665
2406.387
You can then use to_char with a format mask to get the zero padding:
Scott@my11g SQL>with d as (
2 select 000046650 n from dual
3 union all
4 select 024063870 from dual
5 )
6 select to_char(n/10000,'0000.00000') nn from d ;
NN
0004.66500
2406.38700 -
Where to find help with SQL Developer installation?
Hi,
I just want to try out SQL Developer and compare its capabilities to TOAD's. Unfortunately, I am not PC software savvy and now am stuck with a SQL Developer (sqldeveloper-1.2.2998) installation problem. When I clicked on the .exe file, I got a blank SQL Developer screen - there is nothing in the screen except a heading that reads 'Oracle SQL Developer'...
Does anyone know of a blog or a site that I can get some help with problems like mine?
Any help is much appreciated!
Hi,
SQL Developer forum link:
"http://forums.oracle.com/forums/forum.jspa?forumID=260"
There are 2 versions of SQL Developer, with/without JRE.
Try out the full install version with JRE.
HTH
Zack -
Need help with SQL*Loader not working
Hi all,
I am trying to run SQL*Loader on Oracle 10g UNIX platform (Red Hat Linux) with below command:
sqlldr userid='ldm/password' control=issue.ctl bad=issue.bad discard=issue.txt direct=true log=issue.log
And get below errors:
SQL*Loader-128: unable to begin a session
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Can anyone help me out with this problem that I am having with SQL*Loader? Thanks!
Ben Prusinski
Hi Frank,
More progress: I exported ORACLE_SID and tried again, but now I have new errors! We are trying to load an Excel CSV file into a new table in our Oracle 10g database. I created the new table in Oracle and loaded it with SQL*Loader, with the problems below.
$ export ORACLE_SID=PROD
$ sqlldr 'ldm/password@PROD' control=prod.ctl log=issue.log bad=bad.log discard=discard.log
SQL*Loader: Release 10.2.0.1.0 - Production on Tue May 23 11:04:28 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: prod.ctl
Data File: prod.csv
Bad File: bad.log
Discard File: discard.log
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table TESTLD, loaded from every logical record.
Insert option in effect for this table: REPLACE
Column Name Position Len Term Encl Datatype
ISSUE_KEY FIRST * , CHARACTER
TIME_DIM_KEY NEXT * , CHARACTER
PRODUCT_CATEGORY_KEY NEXT * , CHARACTER
PRODUCT_KEY NEXT * , CHARACTER
SALES_CHANNEL_DIM_KEY NEXT * , CHARACTER
TIME_OF_DAY_DIM_KEY NEXT * , CHARACTER
ACCOUNT_DIM_KEY NEXT * , CHARACTER
ESN_KEY NEXT * , CHARACTER
DISCOUNT_DIM_KEY NEXT * , CHARACTER
INVOICE_NUMBER NEXT * , CHARACTER
ISSUE_QTY NEXT * , CHARACTER
GROSS_PRICE NEXT * , CHARACTER
DISCOUNT_AMT NEXT * , CHARACTER
NET_PRICE NEXT * , CHARACTER
COST NEXT * , CHARACTER
SALES_GEOGRAPHY_DIM_KEY NEXT * , CHARACTER
value used for ROWS parameter changed from 64 to 62
Record 1: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 2: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 3: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 4: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 5: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 6: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 7: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 8: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 9: Rejected - Error on table ISSUE_FACT_TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 10: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 11: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 12: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 13: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 14: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 15: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 16: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 17: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 18: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 19: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 20: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 21: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 22: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 23: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 24: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
Record 39: Rejected - Error on table TESTLD, column DISCOUNT_AMT.
Column not found before end of logical record (use TRAILING NULLCOLS)
MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run.
Table TESTLD:
0 Rows successfully loaded.
51 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 255936 bytes(62 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 51
Total logical records rejected: 51
Total logical records discarded: 0
Run began on Tue May 23 11:04:28 2006
Run ended on Tue May 23 11:04:28 2006
Elapsed time was: 00:00:00.14
CPU time was: 00:00:00.01
[oracle@casanbdb11 sql_loader]$
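Every reject above carries the same hint: Column not found before end of logical record (use TRAILING NULLCOLS). Excel-generated CSV rows often stop at the last non-empty cell, so the trailing fields (DISCOUNT_AMT onward) are simply missing from short records. A sketch of a corrected control file based on that hint (TRAILING NULLCOLS is the fix suggested by the log itself; OPTIONALLY ENCLOSED BY '"' is an extra assumption for Excel-style quoting):

```sql
LOAD DATA
INFILE issue_fact.csv
REPLACE
INTO TABLE TESTLD
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ISSUE_KEY,
TIME_DIM_KEY,
PRODUCT_CATEGORY_KEY,
PRODUCT_KEY,
SALES_CHANNEL_DIM_KEY,
TIME_OF_DAY_DIM_KEY,
ACCOUNT_DIM_KEY,
ESN_KEY,
DISCOUNT_DIM_KEY,
INVOICE_NUMBER,
ISSUE_QTY,
GROSS_PRICE,
DISCOUNT_AMT,
NET_PRICE,
COST,
SALES_GEOGRAPHY_DIM_KEY
)
```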
Here is the control file:
LOAD DATA
INFILE issue_fact.csv
REPLACE
INTO TABLE TESTLD
FIELDS TERMINATED BY ','
(
ISSUE_KEY,
TIME_DIM_KEY,
PRODUCT_CATEGORY_KEY,
PRODUCT_KEY,
SALES_CHANNEL_DIM_KEY,
TIME_OF_DAY_DIM_KEY,
ACCOUNT_DIM_KEY,
ESN_KEY,
DISCOUNT_DIM_KEY,
INVOICE_NUMBER,
ISSUE_QTY,
GROSS_PRICE,
DISCOUNT_AMT,
NET_PRICE,
COST,
SALES_GEOGRAPHY_DIM_KEY
) -
Please help with SQL amount calculation
-- Results
with t as (
select 'P11877' Mstr_Program, 1 Year_of_study, 'BUSI1490' program_module, 20 no_of_stud, 1 rank, 30 program_credits, 30 cumm_credits from dual union all
select 'P11877', 1, 'COMP1365', 20, 2, 30, 60 from dual union all
select 'P11877', 1, 'BUSI1375', 20, 3, 30, 90 from dual union all
select 'P11877', 1, 'COMP1363', 20, 4, 30, 120 from dual union all
select 'P11877', 2, 'MARK1174', 8, 1, 30, 30 from dual union all
select 'P11877', 2, 'FINA1068', 8, 2, 15, 45 from dual union all
select 'P11877', 2, 'INDU1062', 8, 3, 30, 75 from dual union all
select 'P11877', 2, 'BUSI1329', 8, 4, 15, 90 from dual union all
select 'P11877', 2, 'MARK1138', 8, 5, 30, 120 from dual)
select * from t;
-- Each MSTR_PROGRAM can have 1 or many program_modules
-- MSTR_PROGRAMs can run for 1 or 2 years (the case above is two years), so some modules run in year 1 and some in year 2
-- NO_OF_STUD is the number of students on the module
-- RANK basically ranks the modules by the number of students on them grouped by program and year
-- e.g. row_number() OVER (PARTITION BY Mstr_Program, Year_of_study ORDER BY COUNT(STUDENT_ID) DESC) rank
-- PROGRAM_CREDITS: each module has a fixed number of credits
-- CUMM_CREDITS: Increments the credit count of modules
-- SUM(program_credits * 10) OVER (PARTITION BY Mstr_Program, Year_of_study
-- ORDER BY count(STUDENT_ID) desc ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cumm_credits
-- I want to trim off any modules once CUMM_CREDITS hits 120, as seen above. I achieve this by wrapping the main query in another SELECT and then limiting
-- with WHERE cumm_credits <= 120.
-- But what I need is:
-- In some cases the cumm_credits may be at, let's say, 90 credits and the next module is worth 40 credits. That next module will not show, as it
-- would take the total above 120 credits, so I need to pro-rata it:
-- So if the credit count > 120, then the last module is counted pro-rata as follows: 1 - ((credit count - 120) / credits from last module)
-- Can anyone help with how I can incorporate this into my current code? The SELECT portion of the original SQL is below; I simplified column names
-- etc. in the above, so they won't be the same
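A quick worked check of the pro-rata rule described above (hypothetical numbers, not from the data set): with cumm_credits at 130 and a last module worth 40 credits, the fractional form and an equivalent subtraction form agree:

```sql
-- 1 - ((credit count - 120) / last module credits), applied to the module,
-- versus the equivalent program_credits - cumm_credits + 120
select 40 * (1 - (130 - 120) / 40) as pro_rata_credits,   -- 30
       40 - 130 + 120              as equivalent_credits  -- 30
from dual;
```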
SELECT * FROM (
SELECT
SR_PROGRAM Mstr_Program
,DECODE (SORLCUR_YEAR, 1, 1,
2, 2,
3, 3,
4, 3, SR_YEAR) year_of_study
,SCT_SUBJ_CODE||SCT_CRSE_NUMB program_module
,COUNT(student_ID) no_of_stud
,row_number() OVER (PARTITION BY sr_program,
DECODE (sr_year, 1, 1,
2, 2,
3, 3,
4, 3, SR_YEAR) ORDER BY COUNT(student_id) DESC, scbcrse_title asc) rank
,(SCT_CREDIT_HRS * 10) program_credits
,SUM(SCT_CREDIT_HRS * 10) OVER (PARTITION BY sr_program, DECODE (sorlcur_year, 1, 1,
2, 2,
3, 3,
4, 3, SR_YEAR)
ORDER BY count(student_id) desc ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cumm_credits
WHERE cumm_credit <=120
ORDER BY Mstr_Program, YEAR_OF_STUDY, RANK asc;
Maybe
SELECT Mstr_Program,year_of_study,program_module,no_of_stud,rank,program_credits old_program_credits,cumm_credits old_cumm_credits,
case when cumm_credits > 120
then program_credits - cumm_credits + 120
else program_credits
end new_program_credits,
case when cumm_credits > 120
then 120
else cumm_credits
end new_cumm_credits
FROM (SELECT SR_PROGRAM Mstr_Program,
DECODE(SORLCUR_YEAR,1,1,2,2,3,3,4,3,SR_YEAR) year_of_study,
SCT_SUBJ_CODE||SCT_CRSE_NUMB program_module,
COUNT(student_ID) no_of_stud,
row_number() OVER (PARTITION BY sr_program,DECODE(sr_year,1,1,2,2,3,3,4,3,SR_YEAR)
ORDER BY COUNT(student_id) DESC,scbcrse_title) rank,
10 * SCT_CREDIT_HRS program_credits,
10 * SUM(SCT_CREDIT_HRS) OVER (PARTITION BY sr_program,DECODE(sorlcur_year,1,1,2,2,3,3,4,3,SR_YEAR)
ORDER BY count(student_id) desc
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cumm_credits
WHERE 0 <= case when cumm_credits > 120
then program_credits - cumm_credits + 120
else program_credits
end
ORDER BY Mstr_Program,YEAR_OF_STUDY,RANK
Regards
Etbin
Edited by: Etbin on 16.12.2011 8:50
with
t as /* simulating the result achieved */
(select 'P11877' Mstr_Program,1 Year_of_study, 'BUSI1490' program_module,20 no_of_stud,1 rank,30 program_credits,30 cumm_credits from dual union all
select 'P11877', 1, 'COMP1365', 20, 2, 40, 70 from dual union all
select 'P11877', 1, 'BUSI1375', 20, 3, 30, 100 from dual union all
select 'P11877', 1, 'COMP1363', 20, 4, 40, 140 from dual union all
select 'P11877', 2, 'MARK1174', 8, 1, 30, 30 from dual union all
select 'P11877', 2, 'FINA1068', 8, 2, 50, 80 from dual union all
select 'P11877', 2, 'INDU1062', 8, 3, 30, 110 from dual union all
select 'P11877', 2, 'BUSI1329', 8, 4, 50, 160 from dual union all
select 'P11877', 2, 'MARK1138', 8, 5, 30, 190 from dual)
select Mstr_Program,Year_of_study,program_module,no_of_stud,rank,program_credits old_credits,cumm_credits old_cumm,
case when cumm_credits > 120
then program_credits - cumm_credits + 120
else program_credits
end new_program_credits,
case when cumm_credits > 120
then 120
else cumm_credits
end new_cumm_credits
from t
where 0 <= case when cumm_credits > 120
then program_credits - cumm_credits + 120
else program_credits
end -
Where clause "where 1=1" help with SQL tuning
Hello Tuning experts,
Is it helpful to use "where 1=1" and then put all the joins and conditions in "AND" clauses when writing SQL queries? I would like to know whether writing queries that way helps with performance.
Thanks in advance.
Edited by: oracle_developer on Oct 8, 2012 10:41 AM
You can see here that "where 1 = 1" is gone from the predicate information in the explain plan.
The optimizer simply discarded it.
SQL> explain plan for
2 select *
3 from emp
4 where 1 = 1
5 and job = 'SALESMAN';
Explained.
PLAN_TABLE_OUTPUT
Plan hash value: 3956160932
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 114 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| EMP | 3 | 114 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("JOB"='SALESMAN')
13 rows selected. -
Help with SQL Server 2005 http Endpoint
I am trying to use mx:webservice to connect directly to a SQL Server 2005 HTTP Endpoint. Is this possible? Is there going to be a problem with crossdomain issues? If the Endpoint is actively listening on port 80 then IIS cannot, so I cannot place crossdomain.xml on the webserver; how will I overcome this crossdomain problem? Am I making this more complicated than it is? If anyone has an example it would be appreciated. All I want is a Flex 2 app talking directly to SQL Server. Seems possible.
Kent, I see that many others have reported that error (doing
a google search), but I see no ready answers. I saw something that
reminded me of a connection string value that I've seen answer some
problems. May be worth a shot for you: try adding this string to
the connection string (in "advanced options") for your datasource:
AuthenticationMethod=Type2
If it doesn't solve it, remove it. But keep it handy in case
it ever may help with some other problem.
Here's one other possible answer for you:
http://www.webmasterkb.com/Uwe/Forum.aspx/coldfusion-server/3206/SQL-Server-2000-Windows-Auth
Sorry I can't be more clear for you. -
Hi All,
I have a problem with the query below. When I run the query I get a pop-up screen asking me to enter a value for
:total_balance,
:emp_code,
:from_date,
:to_date
total_balance is supposed to be the result of a calculation.
Your assistance is appreciated. Thanks,
Ribhi
select FK_VOUCHERSERIAL_N,
FK_VOUCHERVALUE_DA,
DESCRIPTION,
nvl(AMOUNT,0) amount,
TYPE,
Accnt101.postive_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) postive_amount,
Accnt101.negative_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) negative_amount,
Accnt101.total_balanceformula(:total_balance, EMPLOYEE_TRANSACTI.TYPE,Accnt101.negative_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) ,Accnt101.postive_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) , nvl(AMOUNT,0)) total_balance
from EMPLOYEE_TRANSACTI
where FK_EMPLOYEENUMBER0=:emp_code
and STATUS=1
and FK_VOUCHERVALUE_DA<=:to_date
and FK_VOUCHERVALUE_DA>=:from_date
and ((TYPE >7 and TYPE <16)
or (TYPE >34 and TYPE <43)
or (TYPE =7)
or (TYPE =18)
or (TYPE >26 and TYPE <35)
or (TYPE =17)
OR (TYPE = 60)
OR (TYPE = 70)
OR (TYPE = 72)
OR (TYPE = 73)
OR (TYPE = 74)
or (type = 21)
or (type =24)
or (type = 81)
or (type = 82))
order by FK_VOUCHERVALUE_DA asc, FK_VOUCHERSERIAL_N asc, type desc
Hi Satyaki,
My problem is with the SQL and PL/SQL code. I managed to convert some of my reports and now I'm facing a problem with the converted SQL and PL/SQL code. To give you an idea, the following is a sample of a converted report.
Please have a look. (P.S. How can I post formatted text?)
Thanks,
Ribhi
1 - XML template file
<?xml version="1.0" encoding="UTF-8" ?>
<dataTemplate name="Accnt101" defaultPackage="Accnt101" version="1.0">
<properties>
<property name="xml_tag_case" value="upper" />
</properties>
<parameters>
<parameter name="FROM_DATE" dataType="date" defaultValue="01/01/1998" />
<parameter name="TO_DATE" dataType="date" defaultValue="31/12/1998" />
<parameter name="EMP_CODE" dataType="number" defaultValue="44" />
</parameters>
<lexicals />
<dataQuery>
<sqlStatement name="employee_trans">
<![CDATA[
select FK_VOUCHERSERIAL_N,
FK_VOUCHERVALUE_DA,
DESCRIPTION,
nvl(AMOUNT,0) amount,
TYPE,
Accnt101.postive_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) postive_amount,
Accnt101.negative_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) negative_amount,
Accnt101.total_balanceformula(:total_balance, EMPLOYEE_TRANSACTI.TYPE,Accnt101.negative_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) ,Accnt101.postive_amountformula(EMPLOYEE_TRANSACTI.TYPE, nvl(AMOUNT,0)) , nvl(AMOUNT,0)) total_balance
from EMPLOYEE_TRANSACTI
where FK_EMPLOYEENUMBER0=:emp_code
and STATUS=1
and FK_VOUCHERVALUE_DA<=:to_date
and FK_VOUCHERVALUE_DA>=:from_date
and ((TYPE >7 and TYPE <16)
or (TYPE >34 and TYPE <43)
or (TYPE =7)
or (TYPE =18)
or (TYPE >26 and TYPE <35)
or (TYPE =17)
OR (TYPE = 60)
OR (TYPE = 70)
OR (TYPE = 72)
OR (TYPE = 73)
OR (TYPE = 74)
or (type = 21)
or (type =24)
or (type = 81)
or (type = 82))
order by FK_VOUCHERVALUE_DA asc, FK_VOUCHERSERIAL_N asc, type desc
]]>
</sqlStatement>
<sqlStatement name="employee">
<![CDATA[
select NAME,NUMBER0
from EMPLOYEE
where NUMBER0=:emp_code
]]>
</sqlStatement>
</dataQuery>
<dataTrigger name="beforeReportTrigger" source="Accnt101.beforereport" />
<dataStructure>
<group name="G_employee_trans" dataType="varchar2" source="employee_trans">
<element name="FK_VOUCHERSERIAL_N" dataType="number" value="FK_VOUCHERSERIAL_N" />
<element name="FK_VOUCHERVALUE_DA" dataType="date" value="FK_VOUCHERVALUE_DA" />
<element name="DESCRIPTION" dataType="varchar2" value="DESCRIPTION" />
<element name="AMOUNT" dataType="number" value="AMOUNT" />
<element name="postive_amount" dataType="number" value="postive_amount" />
<element name="negative_amount" dataType="number" value="negative_amount" />
<element name="total_balance" dataType="number" value="total_balance" />
<element name="TYPE" dataType="number" value="TYPE" />
<element name="CS_1" function="sum" dataType="number" value="G_employee_trans.total_balance" />
</group>
<group name="G_employee" dataType="varchar2" source="employee">
<element name="NUMBER0" dataType="number" value="NUMBER0" />
<element name="NAME" dataType="varchar2" value="NAME" />
</group>
<element name="balance" dataType="number" value="Accnt101.balance_p" />
<element name="CS_2" function="count" dataType="number" value="G_employee.NUMBER0" />
<element name="CS_3" function="count" dataType="number" value="G_employee_trans.AMOUNT" />
</dataStructure>
</dataTemplate>
2 - PL/SQL package
CREATE OR REPLACE PACKAGE Accnt101 AS
from_date date;
to_date date;
emp_code number;
balance number := 0 ;
function postive_amountformula(TYPE in number, amount in number) return number ;
function negative_amountformula(TYPE in number, amount in number) return number ;
function BeforeReport return boolean ;
function total_balanceformula(total_balance in number, TYPE in number, negative_amount in number, postive_amount in number, amount in number) return number ;
Function balance_p return number;
END Accnt101;
3 - Package Body
CREATE OR REPLACE PACKAGE BODY Accnt101 AS
function postive_amountformula(TYPE in number, amount in number) return number is
begin
if ((TYPE>26 and TYPE<35)
or (TYPE=17))
then
return(amount);
elsif (type = 70)and (amount >=0) then
return (amount) ;
elsif (type = 72)and (amount >=0) then
return (amount) ;
elsif (type = 73)and (amount >=0) then
return (amount) ;
elsif (type = 74)and (amount >=0) then
return (amount) ;
elsif (type = 60)and (amount >=0) then
return (amount) ;
else
return (null) ;
end if;
RETURN NULL; end;
function negative_amountformula(TYPE in number, amount in number) return number is
begin
if ((TYPE>7 and TYPE<16)
or (TYPE >34 and TYPE <43)
or (TYPE=7)
or (TYPE=18)
or (type=21)
or (type=24)
or (type= 81)
or (type=82))
then
return(amount);
elsif (type = 70)and (amount <0) then
return (abs (amount)) ;
elsif (type = 72)and (amount <0) then
return (abs (amount)) ;
elsif (type = 73)and (amount <0) then
return (abs (amount)) ;
elsif (type = 74)and (amount <0) then
return (abs (amount)) ;
elsif (type = 60)and (amount <0) then
return (abs(amount)) ;
else
return (null) ;
end if;
RETURN NULL; end;
function BeforeReport return boolean is
var_pos number(15,3) ;
var_neg number(15,3) ;
beg_bal number(15,3) ;
Begin
begin
select sum (nvl(amount,0)) into beg_bal
from EMPLOYEE_TRANSACTI
where (TYPE=99 or type = 92 or type = 93 or type = 94)
and to_char(from_date,'YYYY')=to_char(date0,'YYYY')
and FK_EMPLOYEENUMBER0=emp_code;
EXCEPTION
WHEN NO_DATA_FOUND THEN
beg_bal := 0;
end;
begin
select sum(nvl(amount,0)) into var_pos
from EMPLOYEE_TRANSACTI
where
(TYPE=17
or type=60
OR TYPE=70
oR TYPE=72
OR TYPE=73
OR TYPE=74
or (TYPE>26 and TYPE<35))
and fk_vouchervalue_da<from_date
and fk_vouchervalue_da>= trunc(from_date,'year')
and FK_EMPLOYEENUMBER0=emp_code;
EXCEPTION
WHEN NO_DATA_FOUND THEN
var_pos := 0;
end;
Begin
select sum(nvl(amount,0)) into var_neg
from EMPLOYEE_TRANSACTI
where ((TYPE>7 and TYPE<16)
or (TYPE >34 and TYPE <43)
or (TYPE=7)
or (TYPE=18)
or (type=21)
or (type=24)
or (type= 81)
or (type=82) )
and fk_vouchervalue_da<from_date
and fk_vouchervalue_da>= trunc(from_date,'year')
and FK_EMPLOYEENUMBER0=emp_code;
balance :=nvl(beg_bal,0) + nvl(var_pos,0) - nvl(var_neg,0);
return(true);
EXCEPTION
WHEN NO_DATA_FOUND THEN
balance :=nvl(beg_bal,0) + nvl(var_pos,0) - nvl(var_neg,0);
RETURN (TRUE);
end;
RETURN NULL; end;
function total_balanceformula(total_balance in number, TYPE in number, negative_amount in number, postive_amount in number, amount in number) return number is
begin
if total_balance is null then
if ((TYPE>7 and TYPE<16)
or (TYPE >34 and TYPE <43)
or (TYPE=7)or (TYPE=18)
or (type=21) or (type=24)
or (type= 81)
or (type=82))
then
return(balance-negative_amount);
elsif ((TYPE>26 and TYPE<35) or (TYPE=17))
then
return(balance+postive_amount);
elsif (type=70 or type=72 or type=73 or type=74
or type=60) and (amount >=0) then
return(balance+postive_amount);
elsif (type=70 or type=72 or type=73 or type=74
or type=60) and (amount <0) then
return(balance-negative_amount);
end if;
else
if ((TYPE>7 and TYPE<16)
or (TYPE >34 and TYPE <43)
or (TYPE=7)or (TYPE=18)
or (type=21) or (type=24)
or (type= 81)
or (type=82))
then
return(total_balance-negative_amount);
elsif ((TYPE>26 and TYPE<35) or (TYPE=17))
then
return(total_balance+postive_amount);
elsif (type=70 or type=72 or type=73 or type=74
or type=60) and (amount >=0) then
return(total_balance+postive_amount);
elsif (type=70 or type=72 or type=73 or type=74
or type=60) and (amount <0) then
return(total_balance-negative_amount);
end if;
end if ;
RETURN NULL; end;
-- Functions to refer to Oracle report placeholders
Function balance_p return number is
Begin
return balance;
END;
END Accnt101 ; -
I have the privilege of performing a very tedious task.
We have some home-grown regular expressions in our company. I now need to expand these regular expressions.
Samples:
a = 0-3
b = Null, 0, 1
Expression: Meaning
1:5: 1,2,3,4,5
1a: 10, 11, 12, 13
1b: 1, 10, 11
1[2,3]ab: 120, 1200, 1201, ....
It gets even more interesting because there is the possibility of 1[2,3]a.ab
I have created two base queries to aid me in my quest. I am using the SQL MODEL clause to solve this problem. I'm pretty confident that I should be able to convert everything into a range and then use one of the MODEL clauses listed below.
My only confusion is how to INCREMENT dynamically. The INCREMENT seems to be a constant in both the FOR and ITERATE statements. I need to figure out a way to increment by .01, .1, etc.
Any help will be greatly appreciated.
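One possible workaround for the dynamic increment (a sketch, untested, and not from the original queries): let ITERATE keep counting in whole steps and multiply by the fractional step only in the assignment, so the increment itself stays a constant. The three-part expression format '1.02:1.08:.02' used here is a made-up extension of the 'min:max' format above:

```sql
WITH t AS (
  SELECT '1.02:1.08:.02' pt FROM DUAL
)
SELECT pt AS code_expression, m_1 AS code
FROM t
MODEL
  PARTITION BY (pt)
  DIMENSION BY (0 AS KEY)
  MEASURES (
    0 AS m_1,
    TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key,
    TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1, INSTR(pt, ':', 1, 2) - INSTR(pt, ':') - 1)) AS max_key,
    TO_NUMBER(SUBSTR(pt, INSTR(pt, ':', 1, 2) + 1)) AS step
  )
  RULES UPSERT
    ITERATE (100000) UNTIL (ITERATION_NUMBER = (max_key[0] - min_key[0]) / step[0])
    -- the increment is constant (one iteration), the step is applied here
    ( m_1[ITERATION_NUMBER] = min_key[0] + ITERATION_NUMBER * step[0] )
ORDER BY pt, m_1;
```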
CODE:
Reference: http://www.sqlsnippets.com/en/topic-11663.html
Objective: Expand a range with ITERATE
WITH t AS
(SELECT '2:4' pt
FROM DUAL
UNION ALL
SELECT '6:9' pt
FROM DUAL)
SELECT pt AS code_expression
-- , KEY
-- , min_key
-- , max_key
, m_1 AS code
FROM t
MODEL
PARTITION BY (pt)
DIMENSION BY ( 0 AS KEY )
MEASURES (
0 AS m_1,
TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key,
TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key
)
RULES
-- UPSERT
ITERATE (100000) UNTIL ( ITERATION_NUMBER = max_key[0] - min_key[0] )
( m_1[ITERATION_NUMBER] = min_key[0] + ITERATION_NUMBER )
ORDER BY pt, m_1
Explanation:
Line numbers are based on the assumption that "WITH t AS" starts at line 5.
If you need detailed information regarding the MODEL clause, please refer to
the reference site stated above or read some documentation.
Partition-
Line 18: PARTITION BY (pt)
This will make sure that each "KEY" will start at 0 for each value of pt.
Dimension-
Line 19: DIMENSION BY ( 0 AS KEY )
This is necessary for the references max_key[0] and min_key[0] to work.
Measures-
Line 21: 0 AS m_1
A placeholder for new values.
Line 22: TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key
The result is '1' for '1:5'.
Line 23: TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key
The result is '5' for '1:5'.
Rules-
Line 26: UPSERT
This makes it possible for new rows to be created.
Line 27: ITERATE (100000) UNTIL ( ITERATION_NUMBER = max_key[0] - min_key[0] )
This reads: ITERATE 100000 times or UNTIL ITERATION_NUMBER = max_key[0] - min_key[0],
which would be 4 for '1:5', but since ITERATION_NUMBER starts at 0, whatever follows
is repeated 5 times.
Line 29: m_1[ITERATION_NUMBER] = min_key[0] + ITERATION_NUMBER
m_1[ITERATION_NUMBER] means m_1[Value of Dimension KEY].
Thus for each row of KEY the m_1 is min_key[0] + ITERATION_NUMBER.
Reference: http://www.sqlsnippets.com/en/topic-11663.html
Objective: Expand a range using FOR
WITH t AS
(SELECT '2:4' pt
FROM DUAL
UNION ALL
SELECT '6:9' pt
FROM DUAL)
, base AS (
SELECT pt AS code_expression
, KEY AS code
, min_key
, max_key
, my_increment
, m_1
FROM t
MODEL
PARTITION BY (pt)
DIMENSION BY ( CAST(0 AS NUMBER) AS KEY )
MEASURES (
CAST(NULL AS CHAR) AS m_1,
TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key,
TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key,
.1 AS my_increment
)
RULES
-- UPSERT
( m_1[FOR KEY FROM min_key[0] TO max_key[0] INCREMENT 1] = 'Y' )
ORDER BY pt, KEY, m_1
)
SELECT code_expression, code
FROM base
WHERE m_1 = 'Y'
Explanation:
Line numbers are based on the assumption that "WITH t AS" starts at line 5.
If you need detailed information regarding the MODEL clause, please refer to
the reference site stated above or read some documentation.
Partition-
Line 21: PARTITION BY (pt)
This will make sure that each "KEY" will start at 0 for each value of pt.
Dimension-
Line 22: DIMENSION BY ( 0 AS KEY )
This is necessary for the references max_key[0] and min_key[0] to work.
Measures-
Line 24: CAST(NULL AS CHAR) AS m_1
A placeholder for results.
Line 25: TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key
The result is '1' for '1:5'.
Line 26: TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key
The result is '5' for '1:5'.
Line 27: .1 AS my_increment
The INCREMENT I would like to use.
Rules-
Line 30: UPSERT
This makes it possible for new rows to be created.
However, it seems it is not necessary here.
Line 32: m_1[FOR KEY FROM min_key[0] TO max_key[0] INCREMENT 1] = 'Y'
Where the KEY value is between min_key[0] and max_key[0], set the value of m_1 to 'Y'.
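As a cross-check outside Oracle, the SUBSTR/INSTR parsing and the FOR-loop expansion can be sketched in Python (SQLite is used only because it ships with Python and also has SUBSTR and INSTR; the MODEL clause itself is Oracle-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The min_key/max_key expressions carry over almost verbatim to SQLite.
rows = conn.execute(
    """
    SELECT pt,
           CAST(SUBSTR(pt, 1, INSTR(pt, ':') - 1) AS INTEGER) AS min_key,
           CAST(SUBSTR(pt, INSTR(pt, ':') + 1) AS INTEGER)    AS max_key
    FROM (SELECT '2:4' AS pt UNION ALL SELECT '6:9')
    """
).fetchall()

# Emulate: m_1[FOR KEY FROM min_key[0] TO max_key[0] INCREMENT 1] = 'Y'
expanded = [(pt, key) for pt, lo, hi in rows for key in range(lo, hi + 1)]
print(expanded)
```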
Of course, you can accomplish the same thing without MODEL using an Integer Series Generator, like this:
create table t ( min_val number, max_val number, increment_size number );
insert into t values ( 2, 3, 0.1 );
insert into t values ( 1.02, 1.08, 0.02 );
commit;
create table integer_table as
select rownum - 1 as n from all_objects where rownum <= 100 ;
select
min_val ,
increment_size ,
min_val + (increment_size * n) as val
from t, integer_table
where
n between 0 and ((max_val - min_val)/increment_size)
order by 3
MIN_VAL INCREMENT_SIZE VAL
1.02 .02 1.02
1.02 .02 1.04
1.02 .02 1.06
1.02 .02 1.08
2 .1 2
2 .1 2.1
2 .1 2.2
2 .1 2.3
2 .1 2.4
2 .1 2.5
2 .1 2.6
2 .1 2.7
2 .1 2.8
2 .1 2.9
2 .1 3
15 rows selected.
--
Joe Fuda
http://www.sqlsnippets.com/ -
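The arithmetic behind the integer-series approach is easy to verify outside the database; a Python sketch with the same sample rows (min_val, max_val, increment_size):

```python
t = [(2, 3, 0.1), (1.02, 1.08, 0.02)]   # the rows of table t
integers = range(100)                   # stand-in for integer_table

vals = sorted(
    round(min_val + inc * n, 10)        # round() hides float noise
    for (min_val, max_val, inc) in t
    for n in integers
    # the BETWEEN predicate, with a tiny epsilon for float safety
    if n * inc <= (max_val - min_val) + 1e-9
)
print(len(vals))  # prints 15, matching "15 rows selected."
```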
Need help with SQL for Pie Chart
I am trying to create a pie chart which would have 3 slices.
Following sql gives me the expected values when I run the sql command:
select
round(avg(CATEGORY_1 + CATEGORY_2 + CATEGORY_3 + CATEGORY_4 + CATEGORY_5),0) "OD Engagements",
round(avg(CATEGORY_6 + CATEGORY_7 + CATEGORY_13),0) "Talent Engagements",
round(avg(CATEGORY_8 + CATEGORY_9 + CATEGORY_10 + CATEGORY_11 + CATEGORY_12),0) "Other Engagements"
from OTD_PROJECT
where STATUS in ('Open','Hold')
I get 3 columns labeled OD Engagements, Talent Engagements, and Other Engagements, with the correct averages based on the data.
I have tried several ways to try to get this to work in the SQL for a pie chart, but keep getting the invalid sql message and it won't save. I also tried saving without validation, but no data is shown on the chart at all.
I want to have a pie, with 3 slices, one labeled OD Engagements with a value of 27, one labeled Talent Engagements with a value of 43 and one labeled Other Engagements with a value of 30. Then I want to be able to click on each pie slice to drill down to a secondary pie chart that shows the breakdown based on the categories included in that type.
Since I am not grouping based on an existing field, I am unsure what the link and label values should be in the chart SQL.
You'll need something like the below. I have no idea what the URL for the drilldown needs to be; it should create an appropriate link in your app for the particular slice. Mainly, the code below breaks the SQL results into three rows rather than three columns. It may well have a syntax error since I can't test.
select linkval AS LINK,
title AS LABEL,
calc_val AS VALUE
FROM (SELECT 'OD Engagements' AS title,
round(avg(CATEGORY_1 + CATEGORY_2 + CATEGORY_3 + CATEGORY_4 + CATEGORY_5),0) AS calc_val,
'f?p=???:???:' || v('APP_SESSION') || '::NO:?' AS LINKVAL
from OTD_PROJECT
where STATUS in ('Open','Hold')
UNION ALL
SELECT 'Talent Engagements' AS title,
round(avg(CATEGORY_6 + CATEGORY_7 + CATEGORY_13),0) AS calc_val,
'f?p=???:???:' || v('APP_SESSION') || '::NO:?' AS LINKVAL
from OTD_PROJECT
where STATUS in ('Open','Hold')
UNION ALL
SELECT 'Other Engagements' AS title,
round(avg(CATEGORY_8 + CATEGORY_9 + CATEGORY_10 + CATEGORY_11 + CATEGORY_12),0) AS calc_val,
'f?p=???:???:' || v('APP_SESSION') || '::NO:?' AS LINKVAL
from OTD_PROJECT
where STATUS in ('Open','Hold')
); -
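The column-to-row reshaping in that answer can be tried outside APEX; a minimal sketch in Python's bundled SQLite (table name and the single sample row are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE otd (od INTEGER, talent INTEGER, other INTEGER)")
conn.execute("INSERT INTO otd VALUES (27, 43, 30)")

# One aggregate per slice, stacked with UNION ALL: three (label, value)
# rows instead of one row with three columns, which is what a pie
# chart region expects.
rows = conn.execute(
    """
    SELECT 'OD Engagements' AS label, ROUND(AVG(od), 0) AS value FROM otd
    UNION ALL
    SELECT 'Talent Engagements', ROUND(AVG(talent), 0) FROM otd
    UNION ALL
    SELECT 'Other Engagements', ROUND(AVG(other), 0) FROM otd
    """
).fetchall()
print(rows)
```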
Hi
How can I convert a string in 'YYYYMMDD' format to a SQL date (in the format 'YYYYMMDD' and also 'YYYYMMDD HH24:MI:SS') to use in my SQL query (for Oracle Database)?
Thanks for your help
Praveen Padala
I've done quite a few dates from Java to an Oracle DB. I like to use the SimpleDateFormat class in the java.text package.
import java.text.*; //Use this import
//In your class
SimpleDateFormat myDateFormatter = new SimpleDateFormat("dd-MMM-yyyy", new Locale("en","US"));
//The .format method returns a string. With this format string it can be included in an SQL command
myDateFormatter.format(myDate);
You can also use a SimpleDateFormat object matching the format of your other dates to get a Date object, which can then be given to the SimpleDateFormat object set up to format in an Oracle-compatible form.
myDate is an object of type Date.
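For comparison, the same round trip in Python, with strptime/strftime playing the role of SimpleDateFormat (the sample value is hypothetical):

```python
from datetime import datetime

raw = "20091017"                      # an incoming 'YYYYMMDD' string
d = datetime.strptime(raw, "%Y%m%d")  # parse it into a date object

# Re-format for a literal matching TO_DATE(..., 'YYYYMMDD HH24:MI:SS')
print(d.strftime("%Y%m%d %H:%M:%S"))  # prints 20091017 00:00:00
```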
Hope this helps!
Kevin. -
Help with SQL query involving time operations
I have created 2 tables in my SQL. One is the user_info table which stores the time of login and timezone of login for each user. The other is the post_table which stores the postid, user who makes the post, time of post and timezone for each posts.
CREATE TABLE user_info
(
user_id VARCHAR(20),
login_date DATE,
login_time_zone VARCHAR(20),
PRIMARY KEY (user_id)
);
CREATE TABLE post_table
(
post_id VARCHAR(20),
user_id VARCHAR(20),
datepost DATE,
time_zone VARCHAR(20),
PRIMARY KEY (post_id),
FOREIGN KEY (user_id) REFERENCES user_info(user_id) ON DELETE CASCADE
);
Some sample data for my tables is as below -
INSERT INTO user_info VALUES( 'u1', to_date('9/17/2009 20:00','MM/DD/YYYY HH24:MI'), -2 );
INSERT INTO user_info VALUES( 'u2', to_date('9/17/2009 19:55','MM/DD/YYYY HH24:MI'), -4 );
INSERT INTO post_table VALUES( 'p1', 'u1', to_date('9/17/2009 20:50','MM/DD/YYYY HH24:MI'), 6 );
INSERT INTO post_table VALUES( 'p2', 'u2', to_date('9/17/2009 20:30','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO post_table VALUES( 'p3', 'u2', to_date('9/18/2009 6:00','MM/DD/YYYY HH24:MI'), 2 );
INSERT INTO post_table VALUES( 'p4', 'u1', to_date('9/17/2009 21:00','MM/DD/YYYY HH24:MI'), -3 );
(Note the format model is 'HH24:MI' rather than 'mi:ss', so 20:00 is read as hours and minutes, not minutes and seconds.)
I need to write an SQL query which finds the user(s) whose time difference between the login time and the latest time when he/she writes a post is the smallest. I need to consider the time zones here as well.
I am unsure if time_zone should be of type VARCHAR or TIMESTAMP so have created it as VARCHAR in my tables.
Someone please help me form this query.
PS: How do I use <code> tags in this forum to write SQL statements?
Edited by: user11994430 on Oct 9, 2009 5:59 PM
I tried with the following test data:
INSERT INTO user_info VALUES( 'u1', to_date('9/17/2009 20:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO user_info VALUES( 'u2', to_date('9/16/2009 13:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO user_info VALUES( 'u3', to_date('9/18/2009 15:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO user_info VALUES( 'u4', to_date('9/20/2009 17:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO user_info VALUES( 'u5', to_date('9/14/2009 3:00','MM/DD/YYYY HH24:MI'), -3 );
INSERT INTO user_info VALUES( 'u6', to_date('9/15/2009 6:00','MM/DD/YYYY HH24:MI'), -3 );
INSERT INTO user_info VALUES( 'u7', to_date('9/16/2009 7:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO user_info VALUES( 'u8', to_date('9/17/2009 8:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO user_info VALUES( 'u9', to_date('9/18/2009 9:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO user_info VALUES( 'u10', to_date('9/19/2009 10:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO user_info VALUES( 'u11', to_date('9/20/2009 11:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO user_info VALUES( 'u12', to_date('9/21/2009 19:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO user_info VALUES( 'u13', to_date('9/1/2009 4:00','MM/DD/YYYY HH24:MI'), -3 );
INSERT INTO user_info VALUES( 'u14', to_date('9/22/2009 7:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO user_info VALUES( 'u15', to_date('9/24/2009 23:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO user_info VALUES( 'u16', to_date('9/25/2009 11:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO user_info VALUES( 'u17', to_date('9/26/2009 18:00','MM/DD/YYYY HH24:MI'), -4 );
INSERT INTO user_info VALUES( 'u18', to_date('9/27/2009 13:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO user_info VALUES( 'u19', to_date('9/17/2009 18:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO user_info VALUES( 'u20', to_date('9/29/2009 22:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO user_info VALUES( 'u21', to_date('9/30/2009 5:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO user_info VALUES( 'u22', to_date('9/15/2009 7:00','MM/DD/YYYY HH24:MI'), -4 );
INSERT INTO user_info VALUES( 'u23', to_date('9/16/2009 17:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO user_info VALUES( 'u24', to_date('9/17/2009 19:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO user_info VALUES( 'u25', to_date('9/18/2009 22:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO user_info VALUES( 'u26', to_date('9/19/2009 15:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO user_info VALUES( 'u27', to_date('9/20/2009 23:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p1', 'u26', to_date('9/14/2009 18:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO post_table VALUES( 'p2', 'u2', to_date('7/1/2009 15:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p3', 'u2', to_date('7/20/2009 20:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p4', 'u5', to_date('7/20/2009 22:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p5', 'u2', to_date('7/21/2009 10:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p6', 'u8', to_date('8/1/2009 20:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO post_table VALUES( 'p7', 'u10', to_date('5/3/2009 15:00','MM/DD/YYYY HH24:MI'), -3 );
INSERT INTO post_table VALUES( 'p8', 'u25', to_date('9/15/2009 20:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO post_table VALUES( 'p9', 'u6', to_date('9/7/2009 19:00','MM/DD/YYYY HH24:MI'), -3 );
INSERT INTO post_table VALUES( 'p10', 'u10', to_date('7/22/2009 10:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p11', 'u9', to_date('7/7/2009 13:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO post_table VALUES( 'p12', 'u2', to_date('7/30/2009 11:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p13', 'u10', to_date('7/22/2009 8:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p14', 'u6', to_date('5/30/2009 23:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p15', 'u3', to_date('5/31/2009 2:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO post_table VALUES( 'p16', 'u12', to_date('6/20/2009 7:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO post_table VALUES( 'p17', 'u20', to_date('6/20/2009 9:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO post_table VALUES( 'p18', 'u27', to_date('9/15/2009 11:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO post_table VALUES( 'p19', 'u26', to_date('7/1/2009 20:00','MM/DD/YYYY HH24:MI'), 0 );
INSERT INTO post_table VALUES( 'p20', 'u25', to_date('7/2/2009 17:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO post_table VALUES( 'p21', 'u27', to_date('7/3/2009 20:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p22', 'u2', to_date('9/15/2009 13:00','MM/DD/YYYY HH24:MI'), 1 );
INSERT INTO post_table VALUES( 'p23', 'u21', to_date('5/30/2009 17:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO post_table VALUES( 'p24', 'u25', to_date('8/30/2009 20:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO post_table VALUES( 'p25', 'u18', to_date('9/13/2009 18:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO post_table VALUES( 'p26', 'u11', to_date('9/9/2009 13:00','MM/DD/YYYY HH24:MI'), -8 );
INSERT INTO post_table VALUES( 'p27', 'u23', to_date('9/10/2009 1:00','MM/DD/YYYY HH24:MI'), -5 );
INSERT INTO post_table VALUES( 'p28', 'u22', to_date('9/10/2009 14:00','MM/DD/YYYY HH24:MI'), -4 );
The output I get is:
USER_ID
u25
u9
u20
u5
u27
u8
u21
u23
u22
u26
u10
USER_ID
u3
u12
u18
u2
u6
u11
17 rows selected. -
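The thread never shows the query itself, so here is one hedged way to sketch the core normalization step in Python, assuming (as the sample data suggests) that the time_zone column holds an hour offset from UTC; the u1 figures are taken from the original sample data:

```python
from datetime import datetime, timedelta

def to_utc(local: datetime, tz_offset_hours: float) -> datetime:
    """Normalize a wall-clock time to UTC given its offset in hours."""
    return local - timedelta(hours=tz_offset_hours)

# u1: login at 20:00 in zone -2, latest post at 21:00 in zone -3
login = to_utc(datetime(2009, 9, 17, 20, 0), -2)
post  = to_utc(datetime(2009, 9, 17, 21, 0), -3)
gap = post - login
print(gap)  # prints 2:00:00
```

Once every timestamp is in UTC, finding the user with the smallest login-to-latest-post gap is an ordinary MIN over per-user differences.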
I was wondering if anyone could assist me with a SQL*Loader question. I am trying to use it to load some Hungarian data. It always turns the Ő character into an upside-down question mark. I understand that this is because the database doesn't understand what that character is. Can someone help me find out how to resolve that? The database is UTF-8. If I paste the character into the application, it takes it. I have tried all kinds of values for the CHARSET parameter in the .ctl file, but nothing seems to work....
I believe we are using ISO 8859-1.
I am having a difficult time thinking that it could be that we don't have the appropriate installations, because if we paste the character in via the Oracle application we can see it in a form. If we SQL*Loader-load that same character and look at it in an Oracle form, we see the question marks.
I've looked at many of the documents on tahiti already, and found some helpful stuff, but no explanation for the problem I am having so far :)
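For what it's worth, Ő (U+0150) is not representable in ISO 8859-1 at all; it lives in ISO 8859-2 (Latin-2). If the data file really is Latin-2, declaring that character set in the control file is the usual fix. A hedged sketch (table, file, and column names here are hypothetical; EE8ISO8859P2 is Oracle's name for ISO 8859-2):

```
LOAD DATA
CHARACTERSET EE8ISO8859P2
INFILE 'hungarian_data.dat'
INTO TABLE my_table
FIELDS TERMINATED BY ','
(name CHAR(100))
```

With the character set declared, SQL*Loader converts the bytes to the database character set itself instead of misreading them as Latin-1.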
Need help with SQL*PLUS!
I have just downloaded Oracle SQL*PLUS 9.2.0.1.0, but I am not able to use it because I do not know the "password", "user name" and "host string". Can anybody please help me with this.
Thanks in advance,
Flo
The username & password are your database username & password. If you don't know these, you'll have to contact your DBA to find out.
The host string is the alias you used when you configured TNS on your machine. Most commonly, this is done by editing the tnsnames.ora file or by using Net8 Easy Config.
Justin
Distributed Database Consulting, Inc.
www.ddbcinc.com
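As a concrete illustration of that host string, a tnsnames.ora entry looks like this (the host, port, and service name are placeholders, not values from this thread):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```

With an entry like that in place, ORCL is what goes in the SQL*Plus "host string" field.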