Physical vs Logical length issues

I am importing a fixed-field flat file, and the file has a line with a special character in it. The datastore column in question is set to a physical and logical length of 40, but ODI errors out saying the actual value is 41 characters. If I increase the logical length to 41 characters but leave the physical length alone, the interface successfully loads the file to my relational destination.
Have I introduced any potential issues down the line by having my logical length greater than my physical length?

I think you changed the physical to 41 and left the logical at 40.
I don't think you will cause any issues down the line.
This concept is most relevant in situations where you have multi-byte characters, which physically take more bytes to load even though the logical length (in characters) is smaller.
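To make the byte-versus-character distinction concrete, here is a small sketch against an Oracle target (the table names are invented for illustration, and it assumes an AL32UTF8 database character set):

-- With BYTE length semantics, one multi-byte character can push a 40-character
-- value past 40 bytes and raise ORA-12899; CHAR semantics counts characters instead.
CREATE TABLE demo_byte (val VARCHAR2(40 BYTE));
CREATE TABLE demo_char (val VARCHAR2(40 CHAR));

-- 39 ASCII characters plus one accented character = 40 characters but 41 bytes:
-- this INSERT succeeds against demo_char and would fail against demo_byte.
INSERT INTO demo_char (val)
SELECT RPAD('x', 39, 'x') || UNISTR('\00E9') FROM dual;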

Similar Messages

  • Timestamp in Data Store defaults to Logical Length of 13

    I inserted a data store in Designer; the table has fields of type TIMESTAMP(6) WITH TIME ZONE.
    The data store column Logical Length defaults to 13.
    The execution fails with ORA-30088 because the temp table DDL is TIMESTAMP(13) WITH TIME ZONE.
    I fixed the issue by changing the 13 to 6 on each column.
    What I want to know is how can I change the default to 6?
    Or why is ODI not picking it up properly?
    Is there a fix for this?

    Hi,
    Go to Topology -> Physical Architecture -> Oracle -> expand Data Types and edit TIMESTAMP WITH TIME ZONE.
    See if the following is specified against "Create Table Syntax" and "Writable Datatype Syntax":
    TIMESTAMP(%L) WITH TIME ZONE
    If yes, edit it and remove the (%L) from it,
    i.e. it will become TIMESTAMP WITH TIME ZONE.
    Save it and execute your interface.
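    For reference, the underlying cause is easy to reproduce directly in SQL: Oracle only accepts a fractional-seconds precision of 0 to 9, so the DDL generated from a logical length of 13 is rejected (a minimal sketch; the table names are made up):
    -- Fails with ORA-30088: datetime/interval precision is out of range.
    CREATE TABLE ts_demo_bad (ts TIMESTAMP(13) WITH TIME ZONE);
    -- Works: explicit precision of 6, or no precision at all (which defaults to 6).
    CREATE TABLE ts_demo_ok  (ts TIMESTAMP(6) WITH TIME ZONE);
    CREATE TABLE ts_demo_ok2 (ts TIMESTAMP WITH TIME ZONE);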
    Thanks,
    Sutirtha

  • OBIEE -10.1.3.4.1 - high physical and logical query response

    Hi All,
    I am facing a performance issue in OBIEE 10g. My report takes 2 minutes to come up, but when I fire the physical query directly in the database the data comes back in 2 seconds.
    Below are the details from the log file. Here I observed that the response time for the physical and logical query is 109 seconds (~2 minutes). Please provide some helpful pointers.
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Execution Node: <<2650466>>, Close Row Count = 3332, Row Width = 26000 bytes
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Execution Node: <<2650466>> DbGateway Exchange, Close Row Count = 3332, Row Width = 26000 bytes
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Execution Node: <<2650466>> DbGateway Exchange, Close Row Count = 3332, Row Width = 26000 bytes
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Query Status: Successful Completion
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Rows 3332, bytes 86632000 retrieved from database query id: <<2650466>>
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Physical query response time 109 (seconds), id <<2650466>>
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Physical Query Summary Stats: Number of physical queries 1, Cumulative time 109, DB-connect time 0 (seconds)
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Rows returned to Client 3332
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Logical Query Summary Stats: Elapsed time 109, Response time 109, Compilation time 0 (seconds)

    Did you run the SQL from a client on the OBIEE server or from your local machine? Does the physical SQL run in 2 seconds against the DB when run on the OBIEE server, but take 109 seconds when sent by the OBIEE server? Is that correct?

  • Timestamp column when reversed in ODI, Logical Length increases to 11

    Hi
    I have a timestamp column in an Oracle database. When I look in SQL Developer
    I see DataType: TIMESTAMP(6), but when I reverse-engineer in ODI the Logical Length increases
    to 11, and this gives an error when I execute my interface.
    I have many timestamp columns like that in my project, and for the interface to work
    I have to manually decrease the length from 11 to 6; then it works fine.
    Is there any workaround?
    Thanks in Advance.

    Hi,
    Trying to help you. :-)
    Try to use the Datatypes options in ODI. Go to Topology Manager -> Physical Architecture, expand Oracle and play around with the data types (try to create a datatype for timestamp, or edit the existing one) and give TIMESTAMP(%L).
    Maybe you can find a solution.
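    If it helps, you can also confirm the precision that is actually stored in the database before comparing it with what ODI reversed (a small sketch; replace the table name with yours):
    -- For datetime columns, the fractional-seconds precision appears in DATA_SCALE.
    SELECT column_name, data_type, data_scale
      FROM user_tab_columns
     WHERE table_name = 'YOUR_TABLE'
       AND data_type LIKE 'TIMESTAMP%';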
    All the best. :)

  • Finding the Logical and Physical filename, Logical and Physical path

    Hello All
    Where and how can I find the below details on an SAP server?
    Logical filename:
    Physical filename:
    Logical path:
    Physical path:
    Regards
    Kalyani

    Hi,
    The physical file is what you see at the OS level.
    The logical file is what ABAP code can read and write through certain function modules.
    Transaction FILE links them together. Typically the logical path ends with "<FILENAME>", and the logical file refers to the logical path.
    To extract the physical path from the logical path name:
    DATA: lf_mandt TYPE sy-mandt,
          lf_opsys TYPE sy-opsys.
    lf_mandt = sy-mandt.
    lf_opsys = sy-opsys.
    " Extract the physical path from the logical path name
    CALL FUNCTION 'FILE_GET_NAME'
      EXPORTING
        client           = lf_mandt
        logical_filename = p_unix
        operating_system = lf_opsys
      IMPORTING
        file_name        = gwa_input
      EXCEPTIONS
        file_not_found   = 1
        OTHERS           = 2.
    IF sy-subrc EQ 0.
      " Concatenating the physical path and the input Unix file name
      CONCATENATE gwa_input p_file INTO gf_file.
    ENDIF.
    You may need to take the help of an ABAPer for this.
    Check the link
    http://help.sap.com/saphelp_nw04/helpdata/en/fc/eb3deb358411d1829f0000e829fbfe/frameset.htm
    Regards

  • Separation of the physical and logical structures

    Hi,
    I am very new to Oracle database administration. While reading Sam Alapati's book "Expert Oracle9i Database Administration," I came across the concept of the separation of an Oracle database’s physical storage structures from its logical storage structures. In particular, Sam states the following in his book:
    “This logical defining of Oracle's database structure has another fundamental motive behind it. By organizing space into logical structures and assigning these logical entities to users of the database, Oracle databases achieve the logical separation of users (owners of the database objects, such as tables) of the database from the physical manifestations of the database in terms of data files and so forth.”
    I am not quite convinced about the value this separation of the physical and logical really adds to the task of database administration. Considering the way DBASE worked, i.e. each table used to be stored as a separate file, what would be lost if Oracle’s implementation were similar and each table (i.e. file) were to be assigned to a particular user. I am not sure of the value added by storing the data from more than one table in more than one file, effectively resulting in a many-to-many relationship between tables and files. Please enlighten me. I would really appreciate it.
    Karim

    "...and each table were to be assigned to a particular user" - Don't know what you mean. In Oracle, every table has one and only one owner.
    "I am not sure of the value added by storing the data from more than one table in more than one file" - If an application has a thousand tables, would you rather manage 1000 files or one?
    In general, separating the physical from the logical allows the physical structure to change without affecting the logical (in theory at least). Even a table is a logical structure. We think of rows and columns, but it isn't stored the way we think of it. When we do a select statement, we don't have to write code to read each block, extract the contents, etc.
    With partitioned tables, it is sometimes a good idea to split up partitions in such a way to get a performance gain. Like placing the most recent (and most queried) month of data on the fastest storage device. If you stuffed everything in to one gigantic file, you would lose that ability.
    If you want to store each table as a separate file, you can do that with Oracle. For each new table, create a new tablespace, and then create a new file for the tablespace. Then come back to this forum in a year and tell us how it's going.
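    To make that last point concrete, here is roughly what a one-file-per-table layout looks like in Oracle (a hedged sketch; the tablespace name, file path and table are invented for illustration):
    -- One dedicated tablespace (and datafile) per table, DBASE-style.
    CREATE TABLESPACE orders_ts
      DATAFILE '/u01/oradata/orcl/orders_ts01.dbf' SIZE 100M;
    CREATE TABLE orders (
      order_id   NUMBER PRIMARY KEY,
      order_date DATE
    ) TABLESPACE orders_ts;
    -- ...repeated for every table, which is exactly the administrative burden
    -- the logical/physical separation is designed to avoid.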

  • How to collect physical and logical disk counters using query?

    Hi friends, I want to view physical and logical disk counters in SQL Server, like Avg. Disk sec/Read, Avg. Disk Bytes/Read, Avg. Disk sec/Write, Avg. Disk Bytes/Write, etc. Can anyone tell me how to view these using a query?
    Thanks in advance.

    Hello,
    sys.dm_os_performance_counters will only show counters related to SQL Server, not the physical disk. If you run the query below in SQL Server it will not return any rows, so no disk counter is present there; you will have to view those using Perfmon.
    select * from sys.dm_os_performance_counters where counter_name like '%disk%'
    This can also be done through PowerShell, but I don't have experience with that. You can search the net for a PowerShell query to see Windows Perfmon counters.
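    That said, if you only need I/O latency as seen by SQL Server itself (rather than the OS-level PhysicalDisk counters), a rough equivalent is available from a DMV; a small sketch, computed per database file since the last restart:
    SELECT DB_NAME(vfs.database_id)                                  AS database_name,
           vfs.file_id,
           vfs.io_stall_read_ms  * 1.0 / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms * 1.0 / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms,
           vfs.num_of_bytes_read,
           vfs.num_of_bytes_written
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;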
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Streams Replication:Source database Physical or Logical Standby DB

    Can the source database in Streams replication be a physical or logical standby database? If so, is the process of configuring Streams the same as for a regular database? Are there any best practices or different configuration if the source is a logical or physical standby DB?
    Thanks in advance.

    Never done it, but I don't see any reason why it should not work.
    Streams, at the capture site, is only a data dictionary game, and in a logical standby your data dictionary is open read/write.
    Streams, at the capture site, never touches the source tables; in fact, they may not even exist from Streams' point of view,
    as it deals only with the redo that is generated.
    So Streams' horizon is limited to the data dictionary, the log buffer, the archives and, in the SYSAUX tablespace, all the LOGMNR_% tables. All of these structures are read/write in a logical standby. However, for the capture/propagation you may have to set the 'include_tagged_lcr' parameter to TRUE.
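    On that last point, the tagged-LCR behaviour is controlled when the capture and propagation rules are created; a hedged sketch of what that might look like (every table, queue and database name here is invented for illustration):
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name         => 'SCOTT.EMP',
        streams_type       => 'capture',
        streams_name       => 'capture_stdby',
        queue_name         => 'strmadmin.capture_q',
        include_dml        => TRUE,
        include_ddl        => FALSE,
        include_tagged_lcr => TRUE,   -- capture LCRs even when a redo tag is set
        source_database    => 'LOGSTDBY');
    END;
    /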

  • Query on Physical Architecture,Logical Architecture and Model

    Hi Experts,
    I am confused about Physical Architecture, Logical Architecture and Models. Please tell me what type of information or data these three hold.
    Thanks

    The physical architecture contains information on the physical setup of your environment, i.e. server names, JDBC connection strings, usernames, passwords, etc.
    The logical architecture provides a layer of abstraction that allows you to group, via contexts, similar physical architecture components that reside in different locations/environments.
    Models are the reverse-engineered representations of the objects in your physical architecture, i.e. tables, flat files, etc. Models are used as sources and targets in your interface designs.

  • Physical Vs Logical Partitioning

    We have 2 million records in the sales infocube for 3 years. We are currently discussing the pros and cons of using Logical partitioning Vs Physical Partitioning. Please give your inputs.

    hi
    there are two types of partitioning generally talked about with SAP BW, logical and physical partitioning.
    Logical partitioning - instead of having all your data in a single cube, you might break it into separate cubes, with each cube holding a specific year's data, e.g. you could have 5 sales cubes, one for each year 2001 through 2005.
    You would then create a Multi-Provider that allowed you to query all of them together.
    A query that needs data from all 5 years would then automatically (you can control this) be split into 5 separate queries, one against each cube, running at the same time. The system automatically merges the results from the 5 queries into a single result set.
    So it's easy to see when this could be a benefit. If, however, your queries are primarily run for just a single year, then you don't receive the benefit of the parallel processing. In non-Oracle DBs, splitting the data like this may still be a benefit by reducing the number of rows in the fact table that must be read, but it does not provide as much value in an Oracle DB, since InfoCube queries use a star transformation.
    Physical Partitioning - I believe only Oracle and Informix currently support Range partitioning. This is a separately licensed option in Oracle.
    Physical partitioning allows you to split an InfoCube into smaller pieces. The pieces, or partitions, can only be created on 0FISCPER or 0CALMONTH for an InfoCube (ODSs can be partitioned too, but require a DBA's involvement). The DB can then take advantage of this partitioning by "pruning" partitions during a query, e.g. a query only needs data from June 2005.
    The DB is smart enough to restrict the indices and data it will read to the June 2005 partition. This assumes your query restricts/filters on the partitioning characteristic. It can apply this pruning to a range of partitions as well, e.g. 0FISCPER 001/2005 through 003/2005 would only look at the 3 partitions.
    It is NOT smart enough, however, to figure out that if you restrict to 0FISCYEAR = 2005, it should only read 000/2005 through 016/2005, since 0FISCYEAR is NOT the partitioning characteristic.
    An InfoCube MUST be empty in order to physically partition it. At this time, there is no way to add additional partitions through the AWB, so you want to make sure that you create partitions out into the future for at least a couple of years.
    If the base cube is partitioned, any aggregates that contain the partitioning characteristic (0CALMONTH or 0FISCPER) will automatically be partitioned.
    In summary, you need to figure out if you want to use physical or logical partitioning on the cube(s), or both, as they are not mutually exclusive.
    So you would need to know how the data will be queried, and the volume of data. It would make little sense to partition cubes that will not be very large.
    Physical partitioning is done at the database level, and logical partitioning is done at the data target level.
    Cube partitioning with the time characteristics 0CALMONTH or 0FISCPER is physical partitioning.
    Logical partitioning is where you partition your cube by year or month, i.e. you divide the cube into different cubes and create a MultiProvider on top of them.
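    For context, the physical partitioning described above corresponds to ordinary database range partitioning on the fact table. A hedged illustration of the idea in Oracle DDL (not the actual BW-generated DDL; all names are invented):
    -- Range-partition a simplified fact table by calendar month so the optimizer
    -- can prune partitions when a query filters on the month.
    CREATE TABLE sales_fact (
      calmonth NUMBER(6),   -- e.g. 200501
      material NUMBER,
      amount   NUMBER
    )
    PARTITION BY RANGE (calmonth) (
      PARTITION p_2005_01 VALUES LESS THAN (200502),
      PARTITION p_2005_02 VALUES LESS THAN (200503),
      PARTITION p_2005_03 VALUES LESS THAN (200504),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );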

  • Export physical and logical details on ASA 5520 and 8.0 software

    Hello... does anybody know if there is any way to export the physical and logical interface details (including interface descriptions) to Excel, PDF or any other format from the command line or ASDM?
    Thanks,
    John

    Export directly in xls, xlsx or pdf - no.
    The output of "show run interface" or "show interface" is pretty structured however and easily parsed by Excel - either manually or via a macro. See output below (you can omit the interface identifier to get all interfaces. I used one for brevity.)
    One can build a script to log in, perform an arbitrary command logging the output to a file which can then be massaged to extract the information you want in a suitable format (csv, etc.). Once in Excel it can be saved as pdf if you're so inclined.
    Of course, some of the full-featured network management tools do a lot of this (and lots more) if you have them.
    ASA-1# sh run int eth0/0
    interface Ethernet0/0
    nameif outside
    security-level 0
    ip address x.x.x.x 255.255.255.224
    ASA-1#
    ASA-1# sh int eth0/0
    Interface Ethernet0/0 "outside", is up, line protocol is up
      Hardware is i82546GB rev03, BW 1000 Mbps, DLY 10 usec
    Auto-Duplex(Full-duplex), Auto-Speed(1000 Mbps)
    Input flow control is unsupported, output flow control is unsupported
    MAC address 0013.c480.6b50, MTU 1500
    IP address x.x.x.x, subnet mask 255.255.255.224
    14156274 packets input, 16095096189 bytes, 0 no buffer
    Received 44764 broadcasts, 0 runts, 0 giants
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
    0 pause input, 0 resume input
    0 L2 decode drops
    8548524 packets output, 1006461151 bytes, 0 underruns
    0 pause output, 0 resume output
    0 output errors, 64 collisions, 6 interface resets
    95 late collisions, 627 deferred
    0 input reset drops, 0 output reset drops, 0 tx hangs
    input queue (blocks free curr/low): hardware (255/230)
    output queue (blocks free curr/low): hardware (255/125)
      Traffic Statistics for "outside":
    14156267 packets input, 15839536990 bytes
    8548619 packets output, 820243613 bytes
    39502 packets dropped
          1 minute input rate 2 pkts/sec,  349 bytes/sec
          1 minute output rate 2 pkts/sec,  425 bytes/sec
          1 minute drop rate, 0 pkts/sec
          5 minute input rate 2 pkts/sec,  2091 bytes/sec
          5 minute output rate 1 pkts/sec,  352 bytes/sec
          5 minute drop rate, 0 pkts/sec

  • Changing ip of physical and logical host

    I have a 2-node Sun Cluster 2.2 working. I have to put it on another LAN, so I have to change both the physical and logical host addresses as well as the terminal concentrator and console IP addresses. How can I do this?
    Thanks

    I don't see a reply so I'll take a shot. Just like you have 0-9 for numbers and you can arrange them any way you want, you still only have 0-9. You have the physical (system blocks and datafiles), which are somewhat static. Then you have the logical (tablespaces, segments, extents), which are volatile and connected by chaining (links) that may be all over the place in the physical but appear to be one whole unit in the logical. Just like an image on a TV screen wearing a red hat: you see a red hat (logical), but it is actually pixels that are chained or linked (data pointers) to each other by the color red. I hope that helps until someone comes by with a better answer.

  • What are logical table, physical table, logical data source

    Hi,
    Can anyone explain to me in detail what logical tables, physical tables and logical data sources are?
    Any quick help will be greatly appreciated.

    In OBI there are three layers - Physical, Business Model and Mapping (BMM) and Presentation.
    As the name suggests, the Physical layer mainly contains the physical aspects of the application, like which connection to use, which schema (or catalog, in the case of SQL Server) to connect to, and which tables to use. This layer confirms the PK-FK joins for the related tables. It mainly depicts how the data is stored in the database layer.
    On top of this layer you have your BMM layer, which is where all the work of a developer starts. You structure the tables according to the business need. The structure has to be a star schema. All the entities in this layer are called logical because they do not directly represent any database object; rather, they provide a logical mapping to the database entities. This becomes clear when you use more than one Logical Table Source (LTS) for your logical tables. One logical column can map to N physical columns based on context. You can also create calculated columns in this layer, which are totally logical in nature.
    I am not writing anything on the Presentation layer as it is not in your question. :)
    Hope this will help.
    Regards,
    Somnath

  • Difference between physical and logical standby database

    What is the difference between physical and logical standby database?

    Hi,
    A physical standby is a read-only DB;
    the redo logs are applied directly.
    A logical standby can be a read/write DB, and the logs are applied in the form of SQL statements.
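    If you need to check which kind of standby you are connected to, the role is visible in V$DATABASE (a small sketch):
    -- Returns PRIMARY, PHYSICAL STANDBY or LOGICAL STANDBY.
    SELECT database_role FROM v$database;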
    Thanks & Regards,
    Pavan Kumar N

  • Physical standby, logical standby which one shud i prefer?

    Hi,
    How do I decide which standby to create, i.e. physical or logical?
    I want to create it for database failover, so which one should I prefer?
    Please give your suggestions.
    Thanks,
    Kuamr.
    Message was edited by:
    user548258

    A physical standby is the easiest to set up and administer and supports all datatypes. You can use it for reporting (if that's rarely required), except that you have to cancel recovery, open the DB in read-only mode, and then put it back into recovery mode.
    A logical standby is a pain, but you can run reports at any time without canceling recovery. A logical standby does not support certain datatypes, like IOTs, BLOBs, etc.
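    For reference, the read-only reporting cycle on a physical standby mentioned above looks roughly like this (a sketch of the typical pre-Active Data Guard sequence, run in SQL*Plus on the standby):
    -- Pause redo apply and open the standby for reporting.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE OPEN READ ONLY;
    -- ... run the reports ...
    -- Return to managed recovery (typically via a bounce back to MOUNT).
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;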
