Saturday, 31 January 2015

Getting sum results for last year, divided quarterly


I am trying to get the financial data grouped by each quarter of last year, and also grouped by the last three weeks.


Also, if possible, I would like each type as a separate column, with the type as the column header.


This is the query I am using right now, which gives the result for last month.


Select to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'MM') ||'-'|| to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'YYYY') AS Month,



case
when (ap.asset = 2 or AP.ASSET_TYPE not like 'CFTD%' and
AP.ASSET_TYPE not like 'CASH%' and AP.ASSET_TYPE != 'FTL') then
'AUM'
when (AP.ASSET_TYPE like 'CFTD%') then
'Deposits'
when (AP.ASSET_TYPE like 'CASH%') then
'Cash'
when (AP.ASSET_TYPE = 'FTL') then
'Loan'
end as Type,

sum(ABS(AP.Eval_Market_Value)) as Total


from (select p.account, p.open_date as Open_Date, ac.description as RM, s.*



FROM k$portfolio p, k$client k
LEFT JOIN k$client_role cr
ON cr.client = k.id
AND cr.role = 136
LEFT JOIN k$actors ac
ON cr.actor = ac.id,
table(gtw$reporting.Statement(p.id,
2,
trunc(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1)),
trunc(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1)),
1,
1002,
1,
'USD')) s
WHERE s.line_type = 'P'
and k.id = p.client
and p.id = s.portfolio
and p.portfolio_type = 'C'
and p.status = 1

) ap


group by to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'MM') ||'-'|| to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'YYYY'),



case
when (ap.asset = 2 or AP.ASSET_TYPE not like 'CFTD%' and
AP.ASSET_TYPE not like 'CASH%' and AP.ASSET_TYPE != 'FTL') then
'AUM'
when (AP.ASSET_TYPE like 'CFTD%') then
'Deposits'
when (AP.ASSET_TYPE like 'CASH%') then
'Cash'
when (AP.ASSET_TYPE = 'FTL') then
'Loan'
end


=====================


Current output: [screenshot in original post]


Desired result: [screenshot in original post]
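
For reference, the quarterly rollup with one column per type could take roughly the shape below. This is a minimal, self-contained sketch of the TO_CHAR(date, 'Q') grouping and the pivot-by-CASE technique on dummy data; the real query would substitute the ap subquery above, extended with a statement-date column:

SELECT to_char(dt, 'YYYY') || '-Q' || to_char(dt, 'Q') AS quarter,
       SUM(CASE WHEN typ = 'Cash' THEN amt ELSE 0 END) AS cash,
       SUM(CASE WHEN typ = 'Loan' THEN amt ELSE 0 END) AS loan
FROM (
  SELECT DATE '2014-02-10' AS dt, 'Cash' AS typ, 100 AS amt FROM dual UNION ALL
  SELECT DATE '2014-05-03', 'Loan', 50 FROM dual UNION ALL
  SELECT DATE '2014-05-20', 'Cash', 75 FROM dual
)
GROUP BY to_char(dt, 'YYYY') || '-Q' || to_char(dt, 'Q')
ORDER BY quarter;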





Column name or number of supplied values does not match table definition - with correct parameters


I am executing a stored procedure in a while loop (approximately 100 times). This takes 20 seconds.


The problem is that at some point I receive this error: Column name or number of supplied values does not match table definition.


When I say at some point, I mean that most of the executions succeed.


But this is not the real problem, because the parameters I send to the stored procedure are 100% OK.


I even caught the error, looked at the parameters that were sent, executed the stored procedure with them outside the while loop, and it succeeded. Moreover, I saw that the error is raised for different parameters every time (the call to the stored procedure fails on different parameters each run).


Could it be caused by memory issues?


My code is something like this:



DECLARE @p1 INT = 1;

CREATE TABLE #MyTable
(
Column1 INT,
Column2 NVARCHAR(255)
);

WHILE (@p1 < 100)
BEGIN
INSERT #MyTable
EXEC dbo.MyStoredProcedure @p1;

SET @p1 = @p1 + 1;
END

SELECT * FROM #MyTable;

IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL DROP TABLE #MyTable;
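
For what it's worth, INSERT ... EXEC requires every result set the procedure returns to have exactly the shape of the target table. Below is a hedged sketch of how a conditional branch returning an extra column would reproduce this intermittent error; the real body of dbo.MyStoredProcedure is unknown, so this stand-in is invented purely to illustrate:

CREATE PROCEDURE dbo.MyStoredProcedure_demo  -- hypothetical stand-in
    @p1 INT
AS
BEGIN
    IF @p1 % 10 = 0
        -- Three columns: INSERT into the two-column #MyTable fails for these values
        SELECT @p1 AS Column1, N'row' AS Column2, GETDATE() AS Extra;
    ELSE
        -- Two columns: matches #MyTable, so most iterations succeed
        SELECT @p1 AS Column1, N'row' AS Column2;
END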




Socket problems after upgrade


We just upgraded from MySQL 5.1 to 5.5 and seem to be having some weird socket issues. Here is the data I think is needed; let me know if you have any ideas.


First, here's the error message we're getting:



150201 1:32:59 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
150201 1:32:59 [Note] - '0.0.0.0' resolves to '0.0.0.0';
150201 1:32:59 [Note] Server socket created on IP: '0.0.0.0'.
150201 1:32:59 [ERROR] Can't start server : Bind on unix socket: Permission denied
150201 1:32:59 [ERROR] Do you already have another mysqld server running on socket: /var/run/mysqld/mysqld.sock ?
150201 1:32:59 [ERROR] Aborting


So naturally we went through and checked read/write permissions; as you can see below, the settings are correct, and no mysqld.sock file exists:



$: ls -ld mysqld
drwxr-xr-x 2 mysql root 40 Feb 1 00:55 mysqld
$: ls -a mysqld
. ..


We are using a different data directory, and so we found this thread: http://ift.tt/1D0r32c. But after making the corresponding changes to AppArmor it still does not work.


/etc/apparmor.d/usr.sbin/mysqld



/usr/sbin/mysqld {
/sc/mysql/ r,
/sc/mysql/** rwk,
...
/var/run/mysqld/mysqld.pid w,
/var/run/mysqld/mysqld.sock w,
}


And finally, here is the my.cnf file:



[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /sc/mysql
tmpdir = /tmp
skip-external-locking
bind-address = 0.0.0.0
key_buffer = 16M
max_allowed_packet = 24M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/


Also, although I knew it wouldn't work since mysqld isn't up and running, we did attempt to run mysql_upgrade to see what errors came back, and this is what we got:



$: mysql_upgrade
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed


Any help would be greatly appreciated =) Let me know if any more information is needed.





Is there an easy way to update `connection strings` to a `database` moved to a different instance of `SQL`


I have created a new instance of SQL Server and exported my database to the new instance. Then I realized that the connection strings for all of my datasets still point to the original database. Is there an easy way to update the connection string for each of the datasets?


This also brings up a question on deployment. Each customer that purchases the software would have a different connection string. Can you point me to some documentation that explains how to handle this scenario?





Error configuring SQL Server as Distributor


I'm trying to configure replication on a new install (moving to SQL 2012) and I keep getting the same error even after messing with the service name, folder permissions, etc...


Using the Configure Distribution Wizard, I'm getting the following error:


SQL Server could not configure 'SQL01' as a Distributor. Additional information: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)


Destination path E:\MSSQL\Data is not valid. Unable to list directory contents. Specify a valid destination path. Changed database context to 'master'. (Microsoft SQL Server, Error: 14430)


I logged in as the local Administrator and connected to SQL Server using the local Administrator account. I tried running SQL Server/Agent as NETWORK SERVICE, as well as creating a local user and using that. The local user has full permissions to the E:\MSSQL\Data folder, which is where all the databases are kept.
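
Since error 14430 complains that the engine cannot list the directory contents, one quick sanity check (a sketch; xp_dirtree runs under the database engine's service account rather than your login) might be:

-- Can the SQL Server service account itself list the folder?
EXEC master.sys.xp_dirtree N'E:\MSSQL\Data', 1, 1;

If this returns nothing or errors, the service account effectively cannot see the path, whatever the ACLs look like from your own session.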


Windows Server 2008 R2, SQL Server 2012 SP2


Thank you, Adrian





Dropped connections, corrupted schema ... is MySQL sick?


Lots of problems, and I wonder if it's all traceable to some root cause. I'm trying to load a simple schema related to product marketing. The symptoms are:



  • Not being able to forward engineer said schema with mysql -u root -p dbeq < dbeq.sql due to losing the connection with the server. It is an 11 kB file with 13 tables, each table has at most 13 columns, and the SQL file is the output of MySQL Workbench, so it is pretty much guaranteed to be syntax-error-free. It does SET foreign_key_checks=0 at the start, but I've checked the file table by table; it creates them all in the correct order so that no mismatched foreign key dependencies occur.

  • Frequent ERROR 2013 (HY000) at line 47: Lost connection to MySQL server during query type messages, even when pasting CREATE TABLE statements at the mysql shell (which are 50 ms queries at most). Max packet is 16 MB and the write timeout is one minute, so those can't be the constraints.

  • The database becomes corrupted by the last table each time (can't create table because it exists, can't drop it because it doesn't).

  • MySQL workbench can't forward engineer directly to the database either; the connection to the MySQL server gets dropped in under a second.


Do I have something wrong with MySQL? I'm going to try creating the schema on a different machine at home (Unfortunately SQLFiddle is down so I haven't been able to test the schema there).


Output of mysqlcheck -u root -p dbeq (noting that it took three attempts to create this many tables):



dbeq.AB_test_questions OK
dbeq.AB_test_responses_v0 OK
dbeq.ABs OK
dbeq.business_units OK
dbeq.customers OK
dbeq.normalization_survey_customers
Error : Table 'dbeq.normalization_survey_customers' doesn't exist
status : Operation failed
dbeq.normalization_survey_questions OK
dbeq.product_test_questions OK
dbeq.product_test_responses_v0 OK
dbeq.products OK
dbeq.project_managers OK
dbeq.survey_hashtable OK
dbeq.surveys OK


Output of mysqldump -d dbeq



-- MySQL dump 10.13 Distrib 5.5.41, for debian-linux-gnu (x86_64)
--
-- Host: localhost Database: dbeq
-- ------------------------------------------------------
-- Server version 5.5.41-0ubuntu0.14.04.1

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
mysqldump: Got error: 1146: Table 'dbeq.normalization_survey_customers' doesn't exist when using LOCK TABLES




Why did this simple insert-into query allocate a lot of tempdb?


I'm using sp_whoisactive to monitor tempdb and found that the following query allocated 30,000,000 tempdb_allocations (by the way, what is the unit of the allocations?):



select a.*, b.A, b.C, b.D
into c
from db1.dbo.a -- a, which has 2M rows, is a heap and doesn't have an index on column bid
left join db2.dbo.b on a.bid = b.id -- b has only several hundred rows, clustered PK (id)


Why does it allocate so much tempdb space? What is the unit of tempdb_allocations?
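
For context, a hedged sketch of one way to watch a session's tempdb allocations directly while the query runs (a standard DMV; session_id 53 is a placeholder):

SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count  -- hash join and sort spills are counted here
FROM sys.dm_db_session_space_usage
WHERE session_id = 53;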





mysqlbinlog recovery table wise


I am going to restore a MySQL database table using log files. My database name is "decsys".


I have already generated SQL from the log files:


C:\wamp\bin\mysql\mysql5.5.8\bin>mysqlbinlog -d decsys C:\wamp\bin\mysql\mysql5.5.8\data\mysql-bin.0000xx > sql1.sql (file size 8GB)


I do not want to recover the whole database, only a single table (table name: cm).


Please tell me how I can do this. Thank you very much.





Is a unique index better than a unique constraint when I need an expression index?


PgSQL docs say:



The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly. One should, however, be aware that there's no need to manually create indexes on unique columns; doing so would just duplicate the automatically-created index.



Based on this, if I want an expression index on a column and also want that column to be unique, would case 2 below be better, since it accomplishes both with a single index? Whereas in case 1, one index is created automatically because of the unique constraint and another one because I need a lower-case index.


Case 1:



CREATE TABLE book (
id SERIAL PRIMARY KEY,
name text NOT NULL,
CONSTRAINT book_name_key UNIQUE (name)
);

CREATE INDEX book_name_like ON book (lower(name));


Case 2:



CREATE TABLE book (
id SERIAL PRIMARY KEY,
name text NOT NULL
);

CREATE UNIQUE INDEX book_name_like ON book (lower(name));
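
For reference, a quick way to confirm what each case actually creates is to list the table's indexes (a sketch against the standard pg_indexes view):

SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'book';

In case 1 this should show three indexes (primary key, unique-constraint index, expression index); in case 2, only two.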




Best practices for selecting partition boundary ranges to maintain empty partitions at both end?


I have partitioned the table based on a date datatype partitioning column.


The data for the column starts at "2014-01-01" and ends at "2015-01-01". I know that a partitioned table should keep empty partitions at the left and right ends. So I selected the following boundary ranges while creating the partition function, shown below:



create partition function PfDateRange(date)
as range right for values ('2014-04-01', '2014-07-01', '2014-10-01', '2015-01-01')


I would like to hear some tips or good practices for selecting boundary ranges.


Also, in the future, data may be inserted that is greater than the last boundary range. So how do I maintain an empty partition at the right end?
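
For the right end, the usual pattern is a hedged sketch like the one below; the partition scheme name PsDateRange is an assumption, and the split must happen before rows for the new period arrive, so the partition being split is still empty:

-- Reserve the next filegroup, then split the rightmost (empty) partition
ALTER PARTITION SCHEME PsDateRange NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION PfDateRange() SPLIT RANGE ('2015-04-01');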


Thanks in advance.





Email Column Constraint to Contain '@' character


How do you create an inline constraint for an email address field that requires an "@" character? Basically, I want a restriction on the field so that the data entered must contain an "@". I'm using Oracle SQL Developer 12c.
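
A minimal sketch of the inline form (table and column names are illustrative):

CREATE TABLE person (
  email VARCHAR2(255) CHECK (email LIKE '%@%')
);

LIKE '%@%' only guarantees the character is present; a stricter pattern would need REGEXP_LIKE.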





Connecting/Syncing MySQL with SQL Anywhere 10


We have a database on our network that uses SQL Anywhere 10 with an ODBC connector as a database for customer and product information. Our intention is to sync this database with a newly created MySQL database. I'm fairly new to working with databases, and am having an incredibly difficult time finding clear documentation on how to accomplish this.


I've installed the ODBC connector for MySQL, but am a little lost on how to proceed. It seems as if this connector is intended for giving a MySQL database an ODBC connector of its own, not necessarily for connecting to a remote ODBC-connected DB.


If anyone has any information or can indicate a clear direction (or even a great place to start) it would be really helpful.


Kind Regards





SQL Server: Invalid Log Scan Number (...) passed to log scan in database (...) is not valid


We are getting a large number of the below errors on one of our primary production databases. CheckDB is completely clean.


We thought it was related to replication, so we shut it down using sp_removedbreplication and the problem went away. For other reasons, we did a restore from last night's backup, which resurrected replication again. Immediately the errors returned and have persisted despite my removing replication again.



  • Error: 9003, Severity: 20, State: 1.

  • The log scan number (390576:5234:44) passed to log scan in database 'XXXX' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.


I'm not finding much on the web about this, except articles about the master database, SQL Server being unable to start, and replication issues.


Any advice would be tremendously appreciated!





Design Database for Wholesales


Hi, I want to design a MySQL database for a website for about a thousand wholesalers that stores their stock and its value. Please help me.





Mysql keep shutdown after few restart on fresh Centos installation


I have restarted MySQL a few times (systemctl stop mysql and systemctl start mysql) on my CentOS 7 box. After some time, all my applications on this machine hit this error:



SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (111 "Connection refused")


When I check the status, it seems fine:



[$]systemctl status mysql
mysql.service - LSB: start and stop MySQL
Loaded: loaded (/etc/rc.d/init.d/mysql)
Active: active (exited) since Thu 2015-01-29 21:46:25 EST; 1 day 14h ago

Jan 29 21:46:24 xxx systemd[1]: Starting LSB: start and stop MySQL...
Jan 29 21:46:25 xxx mysql[13121]: Starting MySQL. SUCCESS!
Jan 29 21:46:25 xxx systemd[1]: Started LSB: start and stop MySQL.
Jan 31 11:49:59 xxx systemd[1]: Started LSB: start and stop MySQL.


But when I try to log in using mysql -u root -p, the same error comes out. Is there any setting to configure after installing MariaDB 10.0.16? Let me know what to tune for an application like WordPress. For your information, I only have 512 MB of RAM and a total of 20 GB of disk.


thanks





SQLite 3 Tree List Without CTE


I adapted a SQL statement to work with SQLite 3. But unfortunately common table expressions and the WITH clause work only in SQLite 3.6 and later, and the users of my open-source application are using SQLite 3.2; I cannot force them to update the whole Linux system to get the new packages. Is it possible to adapt the code to work without a CTE and WITH clause, using only plain SQL?


Here's the code:



WITH
cte AS
(SELECT 0 AS level, collectionID, collectionName, parentCollectionID, CAST(collectionID AS VARCHAR(128)) AS Sort
FROM collections WHERE parentCollectionID IS NULL
UNION ALL
SELECT p.level + 1, c.collectionID, c.collectionName, c.parentCollectionID, CAST(p.Sort || '/' || CAST(c.collectionID AS VARCHAR) AS VARCHAR(128))
FROM collections c
INNER JOIN cte p ON p.collectionID = c.parentCollectionID)
SELECT
collectionID,
printf('%*s', level * 4, '') || collectionName AS collectionName,
Sort,
parentCollectionID
FROM cte
ORDER BY Sort;


Here's the result:



collectionID collectionName Sort parentCollectionID
1 Dissertação 1 0
10 Filosofia Reformacional 10 0
11 Dooyeweerd 11 0
14 ZotPad favorites 14 0
15 Diversos 15 0
2 Bíblia 2 0
3 Políticas Públicas 3 0
4 Zotero 4 0
5 Linux 5 0
6 Tese Doutorado 6 0
12 Pontal Do Paraná 6/12 6
7 Multimodal 6/7 6
13 Modalidades 6/7/13 7
8 Base Histórica 6/7/8 7
9 Artigo Weber 9 0
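
One possible direction (a sketch, not a drop-in replacement): if the tree has a known maximum depth, the recursion can be unrolled into fixed self-joins. For up to three levels, assuming the same collections table:

SELECT c1.collectionID,
       c1.collectionName,
       CASE
         WHEN c3.collectionID IS NOT NULL
           THEN c3.collectionID || '/' || c2.collectionID || '/' || c1.collectionID
         WHEN c2.collectionID IS NOT NULL
           THEN c2.collectionID || '/' || c1.collectionID
         ELSE '' || c1.collectionID
       END AS Sort,
       c1.parentCollectionID
FROM collections c1
LEFT JOIN collections c2 ON c2.collectionID = c1.parentCollectionID
LEFT JOIN collections c3 ON c3.collectionID = c2.parentCollectionID
ORDER BY Sort;

One more LEFT JOIN per extra level; the printf indentation would also have to be replaced, since printf is likewise unavailable in SQLite 3.2.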


Thank you so much, Best regards, Christian





Have a separate database for data warehousing?


I have built a typical web application using PHP and MySQL for enterprise usage.


The MySQL database is 3NF as much as possible.


However, as time goes by, the requests for the data appear to be more for a data warehouse so that they can slice and dice the data in whatever way they want.


Hence I picked up a book, the Kimball Group's The Data Warehouse Toolkit, 3rd edition.


In it, the authors describe a bus matrix approach where I keep my 3NF database and have a separate database meant for the data warehouse, which of course is designed as a star schema.


My question is, since I am most familiar with PHP and MySQL, should I have the data warehouse star schema in a separate database from the original 3NF database?


No commercial BI tool will be used. Anything that looks like OLAP or BI would likely be open-source software that I may augment myself.


Please advise.





How to add an external data source into MySQL?


I want to add an ODBC data source in MySQL. Is that possible? What is the best way to perform this?





How to get all documents from a particular chunk in MongoDB?


From mongos log, I got the following message



balancer move failed: { chunkTooBig: true, estimatedChunkSize: 408839903, ok: 0.0, errmsg: "chunk too big to move" } from: shard4 to: shard1


I know this issue occurs due to either



  1. documents size to large,

  2. index key not granular enough, and/or

  3. chunkSize too small


While I'm confident that cases 2 and 3 are not the cause, I want to see whether someone is abusing the database by inserting extra-large documents.


The mongos log shows the docs are in the range :



min: { Country: "US", City: "LA" }  max: { Country: "US", City: "NY" }


I want to aggregate the schema of documents within this range, and see if some documents contain extra fields.


Since the indexes are not monotonically increasing, I can't get all documents within this range (or can I?).


How can I



  1. get the chunkID with the query

    db.coll.find({ Country:"US", City:"LA" })




  2. get all the docs within this chunk?


Other suggestions are welcome. Thanks in advance.
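
For what it's worth, a hedged sketch of both steps in the mongo shell (the namespace mydb.coll is a placeholder; config.chunks is the standard sharding metadata collection on a mongos):

// 1. Find the chunk metadata covering this shard key range
use config
db.chunks.find({ ns: "mydb.coll", "min.Country": "US", "min.City": "LA" })

// 2. Back in the data database: fetch every document inside the chunk's bounds
//    (chunk ranges are min-inclusive, max-exclusive on the shard key)
use mydb
db.coll.find({ Country: "US", City: { $gte: "LA", $lt: "NY" } })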



Application is hanging when database upgraded from SQL Server 2000 to SQL Server 2008 R2


Presently we are working with SQL Server 2000; the database works perfectly with our software, a retail sales counter application developed in VB 6.0. But after upgrading to SQL Server 2008 R2, we are seeing timeouts. We didn't change the code or the database structure when we upgraded from 2000 to 2008 R2.


This is the locking function for getting the max number from the LockFile table:



Private Function LockDatabase()
Dim cntr As Long
Dim indx As Integer
Dim rec_Lock As New ADODB.Recordset

'On Error GoTo CheckReTry

ReTry:
'Set rec_Lock = db_kshetr.OpenRecordset("select LockSemaphore, BillNo as bno from LockFile", dbOpenDynaset, , 2)
rec_Lock.Open "Select LockSemaphore, BillNo as bno from LockFile", con, adOpenDynamic, adLockPessimistic
TryGetLock:
If rec_Lock.Fields("LockSemaphore") <> gCounter & "Locked" Then
While (rec_Lock.Fields("LockSemaphore") <> "UnLocked")
' For indx = 0 To 10 + Rnd(25)
' Next indx
rec_Lock.Requery
Wend

'rec_Lock.Edit

If rec_Lock.Fields("LockSemaphore") = "UnLocked" Then
rec_Lock.Fields("LockSemaphore") = gCounter & "Locked"
rec_Lock.Update
rec_Lock.Requery
Else
rec_Lock.CancelUpdate
GoTo TryGetLock
End If
End If

While (rec_Lock.Fields("LockSemaphore") <> gCounter & "Locked")
For indx = 0 To 10 + Rnd(25)
Next indx
rec_Lock.Requery
Wend

counter = Val(rec_Lock.Fields("bno") & "")
Set rec_Lock = Nothing
Exit Function
CheckReTry:
MsgBox "Cannot Lock Database", vbCritical + vbOKOnly, "Security"
Exit Function
End Function


This is the unlocking function for the LockFile table:



Private Function UnlockDatabase()

Dim cntr As Long
Dim indx As Integer
Dim rec_Lock As New ADODB.Recordset

On Error GoTo CheckReTry

ReTry:
'Set rec_Lock = db_kshetr.OpenRecordset("select LockSemaphore, BillNo as bno from LockFile", dbOpenDynaset, , 2)
rec_Lock.Open "Select LockSemaphore, BillNo as bno from LockFile", con, adOpenDynamic, adLockPessimistic
TryGetLock:
While (rec_Lock.Fields("LockSemaphore") <> gCounter & "Locked")
For indx = 0 To 10 + Rnd(25)
Next indx
rec_Lock.Requery
Wend
'rec_Lock.Edit
rec_Lock.Fields("LockSemaphore") = "UnLocked"
rec_Lock.Update
rec_Lock.Requery

While (rec_Lock.Fields("LockSemaphore") <> "UnLocked")
For indx = 0 To 10 + Rnd(25)
Next indx
rec_Lock.Requery
Wend
Set rec_Lock = Nothing
Exit Function
CheckReTry:
MsgBox "Cannot UnLock Database", vbCritical + vbOKOnly, "Security"
Exit Function
End Function


The LockSemaphore field's states are 'Locked' and 'UnLocked'.


Because we have multiple clients, if one user has locked the LockFile table to get the maximum BillNo while another user tries to get the maximum BillNo from the same table at the same time, the whole system hangs, including the server (SQL Server 2008 R2).


How can we solve this problem? Thanks in advance.
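
One direction worth considering (a hedged sketch; it assumes LockFile holds the current BillNo in a single row, as the code above suggests) is to let SQL Server hand out the next number atomically, which removes the Requery spin loop entirely:

-- Increment and read the next bill number in one atomic statement
DECLARE @NextBillNo INT;

UPDATE dbo.LockFile
SET @NextBillNo = BillNo = BillNo + 1;

SELECT @NextBillNo AS NextBillNo;

Two sessions running this concurrently simply queue on the row lock for a few milliseconds instead of polling a semaphore column.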





database_question_determine_2NF_3NF


I solved this example to determine 2NF and 3NF. I would like to check if my answer is correct. Can somebody check the 2NF and 3NF of this example, please?



R( A B C D E F G H I J)


with functional dependencies:



AB -> C
A -> DE
B -> F
F -> GH
D -> IJ


My result:



2NF:
R1 (AB -> C , A -> E)
R2 (B -> F , F -> GH)
R3 (D -> I)

3NF:
R1 (AB -> C , A -> E)
R2 (B -> F)
R4 (F -> GH)
R3 (D -> I)




Loose database structure


I'm just thinking out loud. I'm wondering if there's any database technology where the structure does not need to be defined upfront.


For example, it might start off with just one column or row, and the user keeps adding more columns, rows, or tables on the fly as they see necessary. The relationships between these rows and columns would just work.





finding text in a string


I am trying to find records where the postcode starts with one of a certain set of character pairs, out of some 10,000 records, 1,300 or so of which should meet the select below. My code is:


Select * from table where mid(postcode,1,2 ) IN ("CV DE LE MK NN")


This returns zero results. A typical postcode would be something like CV1 9AD.


If I write


Select * from table where mid(postcode,1,2 ) = "CV"


I get the full list of records for postcodes beginning with "CV". I would have thought that the IN statement would work. Can anyone tell me why it does not?
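
For reference, IN compares the left-hand value against each list element separately, so "CV DE LE MK NN" is a single 14-character string that can never equal a two-character prefix. A sketch of the presumably intended form, with one string per prefix:

Select * from table where mid(postcode,1,2) IN ("CV", "DE", "LE", "MK", "NN")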





Friday, 30 January 2015

MySQL service stops responding after some time when 5 people are using the connection


MySQL stops after some time when a number of users are connected to the MySQL server. I don't know what the problem is, or whether it is a MySQL problem at all, because from Java I can connect properly.


The symptom is that sometimes I need to manually restart the MySQL service; after that it works perfectly, and after about 10 minutes the same problem occurs again.


Please help me.





getting an error in trigger while invoking a procedure in it


I have created a procedure named 'prd' which takes 3 parameters (pmat_amt, pstartdate, pstopdate) and is used to update columns in the table 'rd_details', which contains the 3 columns mat_amt, startdate, stopdate. Later on I created a trigger:

create trigger rd_trig
after update of stopdate on rd_details
for each row
begin
prd(:new.mat_amt,:new.:new.startdate,:new.stopdate);
end;
/


But I'm getting these errors: bad bind variable ':new.mat_amt', bad bind variable ':new.startdate', bad bind variable ':new.stopdate'.





Select from two tables with not direct relationship


OK, I found myself at a dead end, and I know there must be a way, but my brain is just about to explode. This is the case: I have two tables with tons of rows, one for work done (let's call it the works table), something like this:



ID | Home_Work_ID | task_id | Person_id
1 | 23 | 1 | 30
2 | 23 | 2 | 31
3 | 23 | 3 | 30
4 | 876 | 1 | 31
5 | 123 | 3 | 32


and another table to report the fixes to do on the works mentioned before; let's call it the fixes table:



ID | Home_Work_ID | Person_reporting_id | Task_id | Details
1 | 23 | 93 | 1 | Fix this
2 | 23 | 85 | 3 | Fix that
3 | 123 | 86 | 3 | Fix something


As we can see, in the fixes table there are homeworks with fixes reported, and those homeworks were done by person_id 30 and 32 (from the works table). The result I am trying to achieve would be:



Person | Fixes
John (lets say this is person id 30) | 2
Kate (lets say this is person id 32) | 1


The problem is, I need to create a report that shows me who was the person responsible for the work done in the works table that has reported fixes in the second table, and how many fixes were reported for that person. The only link is the homework ID and probably the task ID, so I was testing something like this:



SELECT P.person_name AS Person, COUNT(F.id) AS fixes
FROM fixes F
INNER JOIN homeworks H ON F.home_work_id = H.id
INNER JOIN works as W
INNER JOIN people AS P ON W.person_id = P.id
INNER JOIN tasks AS T ON T.task_id = F.task_id
WHERE F.task_id = W.task_id
AND F.home_work_id = W.home_work_id
AND W.home_work_id IN (SELECT home_work_id FROM fixes GROUP BY home_work_id)
GROUP BY P.person_name ORDER BY fixes DESC


There are three more inner/left-joinable tables with the ID and name of the person, homework, and task. This shows me the person responsible, but the number of fixes for that person and that homework/task doesn't match the ones in the fixes table. I guess I am inner joining the works table the wrong way, but I don't see the light yet. Any help will be really appreciated.
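
For comparison, a hedged sketch of the join as I read the schema (it assumes each (home_work_id, task_id) pair in fixes identifies exactly one responsible row in works):

SELECT P.person_name AS Person, COUNT(*) AS fixes
FROM fixes F
INNER JOIN works W
        ON W.home_work_id = F.home_work_id
       AND W.task_id = F.task_id
INNER JOIN people P
        ON P.id = W.person_id
GROUP BY P.person_name
ORDER BY fixes DESC;

Against the sample rows above this yields John 2 and Kate 1, since every fix is counted exactly once through its matching works row.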


Regards





What is difference between CM mode and NFM mode in DB2 V10.1


What is the main difference between CM (conversion mode) and NFM (new-function mode) in DB2 V10.1? Please let me know the features of NFM mode.





pg_dump in 1G VPS crashes


I'm trying to run pg_dump on a relatively small database (2G), but pg_dump keeps crashing with an out of memory error.


$ du -h --max-depth=0 9.4
1.9G    9.4


The command:

pg_dump -Fp -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE".sql.gz.in_progress


Any ideas on how to allow the dump to happen?


Thanks





Product Attribute System While Avoiding EAV


I'm working on building an attribute system for products. The issue that I'm running into is that various products can have very different attribute requirements.


Some E-commerce websites such as Magento use an EAV system. That's something that I'd like to avoid due to performance issues, database cleanliness/complexity/control.


So far I'm leaning towards using a variety of tables. For example, I might have a table of attributes that pertain to medical devices, and then a different table for toys and games. Any attributes that are generic would just live in the actual product table. If I do choose this option, then I suppose my product table would need a column indicating which attribute table is used.
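
A minimal sketch of that subtype-table idea (generic SQL; all names are illustrative):

CREATE TABLE product (
    product_id    INT PRIMARY KEY,
    name          VARCHAR(255) NOT NULL,
    attribute_set VARCHAR(50)  NOT NULL  -- which attribute table applies, e.g. 'medical_device'
);

CREATE TABLE product_medical_device (
    product_id INT PRIMARY KEY REFERENCES product (product_id),
    fda_class  VARCHAR(10),
    is_sterile CHAR(1)
);

CREATE TABLE product_toys_games (
    product_id INT PRIMARY KEY REFERENCES product (product_id),
    min_age    INT,
    players    INT
);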


I'm not actually a Database Administrator, so I don't really know what's best. I hope that someone is able to give me some insights into a good implementation or acknowledge that my current thought process is in the right direction.


Thank you





Non-sequential Primary Key performance in PostgreSQL 9.3


Due to a requirement to support a distributed environment, each of my clients has been assigned a unique node ID; the client appends a running number to the node ID to form the primary key and stores it in the local database. E.g.



Client A node id = 200, first row in a table will have primary key 200,000


Client B node id = 100, first row in a table will have primary key 100,000



These records then replicate to a centralized database. Since the primary keys arriving at the centralized database are not in sequence, will this cause any serious performance issue as the data size gets bigger?


Possible sequence of inserting new data at centralized database:



200,000
100,000
100,001
200,001
300,000
100,002


This may cause a big performance problem in SQL Server, where the table is clustered along the PK. However, will this happen in PostgreSQL 9.3?


Notes:



  1. I can't use composite keys as it does not play well at my presentation layers.

  2. The 3-digit running number is just a simplified example; the real running number will be much bigger and sufficient.





Multiple databases or one large database?


My company has a framework where different areas of a region are created as separate databases; we call them datamarts. All of the databases are exactly the same, and in some regions we have 20-30 databases where some or all are included in a particular query. Is there a performance penalty when we perform queries that join these databases, or does SQL Server handle these cross-database queries seamlessly? Some of the databases may have a few million transactions.





Manage multiple short-term tables with same schema or use single table for short-term data?


My team has proposed to store session state data for each open client session in a table in the database. The application server creates a new table for each client session as the user logs in via web browser. The data that needs to be kept for the session is created along with the table, and is not modified (except in rare circumstances). When the user logs out, the table is dropped. Orphan tables are purged regularly.


The application server manages these short-term tables. There won't be any database code that manages the tables or the data.


I'd like to find out the pros and cons of this design, in contrast to others. For example, all of the same functionality could be implemented in a single table that stores state data for all sessions. If done this way, there would be one clustered index on the session ID. Each access to the state data would read all of the data for exactly one session ID.
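
For concreteness, a sketch of that single-table alternative (T-SQL; column names are illustrative):

CREATE TABLE dbo.SessionState
(
    SessionId UNIQUEIDENTIFIER NOT NULL,
    ItemKey   INT              NOT NULL,
    StateData VARBINARY(MAX)   NULL,
    CreatedAt DATETIME2        NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT PK_SessionState PRIMARY KEY CLUSTERED (SessionId, ItemKey)
);

Every read for a session is then a single clustered-index seek on SessionId, and logout becomes a DELETE instead of a DROP TABLE.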


What are the scalability concerns for creating/dropping multiple tables rapidly? Assume this occurs on the order of 1K-10K times per hour.





MySQL Workbench asking for conversion


Hi everyone, I have a question about SQL in MySQL Workbench. I want to convert an ER diagram, which I created in Word, into a SQL database. Can anybody tell me how to connect the gas with the users in the program? Will the relationship in the ER diagram become a table in the program, and if so, how? Here I post the ER diagram and the two tables in MySQL. Thanks in advance!! :)


[ER diagram image]





This Extended Events session works on Dev but not on Pro. Why? [on hold]


I do have the following Extended Event session on my Dev and Pro boxes:



CREATE EVENT SESSION [sp_showplan] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan(SET collect_database_name=(1)
ACTION(sqlserver.plan_handle)
WHERE ([package0].[equal_uint64]([object_type],(8272))
AND [sqlserver].[equal_i_sql_unicode_string]([object_name],N'MyStoreProcedure')))
ADD TARGET package0.event_file(SET filename=N'E:\DBA_Audit\SP_Exec.xel',
metadatafile=N'E:\DBA_Audit\SP_Exec.xem')
WITH (MAX_MEMORY=4096 KB,
EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,
MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,
STARTUP_STATE=OFF)
GO


Its main purpose is to capture the execution plan for a given stored procedure (in any database).


I tested it on Dev and it works. I called the stored procedure from two different databases and I was able to capture the execution plan.


Then I went to the live box, but it does nothing. And yes, the stored procedure was called there too.


Any reason why it is not collecting the information on Pro? Anything that I should look at?


EDIT 1:


The session has been created. I started it and it can generate the file. But the session runs and saves nothing. There are no permission issues at the server level; it's just that the session saves nothing.


EDIT 2:


Yes, I know this event can be expensive. By "doing nothing" I mean it captures no plan. I successfully created the XE session at the SQL instance level, of course.


EDIT 3:


I'm Jose, the OP. I'm not sure why my question was put on hold; to me it is clear. The T-SQL above for the Extended Events session captures the execution plans of stored procedures on my Dev SQL instance. When the same Extended Events session is created and running on Pro, it does not capture the actual execution plan. I want to know why! A bug, maybe? Maybe the T-SQL script that was generated from Dev via the GUI added a line of code that breaks the Extended Events session on Pro? I did not create the Extended Events session on Pro via the GUI; that is how I created it on Dev. I just scripted it out.
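
For anyone comparing the two boxes, one quick check (a sketch against the standard Extended Events DMVs) is whether the session is actually running on Pro and its file target is being invoked:

SELECT s.name, t.target_name, t.execution_count
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t
  ON t.event_session_address = s.address
WHERE s.name = 'sp_showplan';

If the session is missing from sys.dm_xe_sessions, it was created but never started on that instance.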





Facing SQL Error 3403


Recently I was working in SQL Server and I faced SQL error 3403, which stated:



"Error 3403, Severity 22 During recovery initialization, page %ld was encountered. This page belongs to object %ld, not the log"



Can anyone help me get rid of it by suggesting reliable solutions?





Can MySQL log lock waits?


Is it possible to have MySQL log queries or transactions where lock waits occur? One can debug specific application errors generated by lock wait timeout errors from the DB, but without the ability to see the lock waits that don't hit the timeout, it seems difficult to anticipate where the next timeout error may be lurking.


Is there a way to have MySQL log these?
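
Not a log, but close: a hedged sketch of polling InnoDB's lock-wait tables (available in information_schema on 5.5+ with InnoDB) to see waits as they happen, including ones that never reach the timeout:

SELECT r.trx_id    AS waiting_trx,
       r.trx_query AS waiting_query,
       b.trx_id    AS blocking_trx,
       b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;

Run periodically from a cron job or an event, this would effectively build the log the question asks for.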





How to connect to Oracle Database using SQL Plus?


I need to connect to an Oracle database on a remote host (I have no Oracle software installed locally) using SQL*Plus. I've already installed SQL*Plus, and looking on the internet I found several ways to connect, but none of them works in my case.


Sometimes I get this error:



ORA-12170: TNS:Connect timeout occurred


This is the way I'm trying to connect to Oracle:



sqlplus64 name_user/password@//remote_host:port/database_name




How to write a query which recursively returns all foreign key references to a column?


I need to obtain a list of foreign keys which refer to a given column, as well as all foreign keys which refer to those keys, and so on. Order does not matter. I have



SELECT
s.name,
t.name,
c.name,
fk.name
FROM
sys.foreign_key_columns fkc
INNER JOIN sys.foreign_keys fk
ON fk.object_id = fkc.constraint_object_id
INNER JOIN sys.columns c
ON c.object_id = fkc.parent_object_id
AND c.column_id = fkc.parent_column_id
INNER JOIN sys.tables t
ON t.object_id = fkc.parent_object_id
INNER JOIN sys.schemas s
ON s.schema_id = t.schema_id
WHERE
fkc.referenced_object_id = OBJECT_ID(N'dbo.MyTable')
AND fkc.referenced_column_id = (
SELECT TOP 1
column_id
FROM
sys.columns
WHERE
name = 'MyColumn'
AND object_id = OBJECT_ID(N'dbo.MyTable')
)


However, this just returns the foreign keys which immediately reference the column, not the entire tree of foreign keys referencing this column.
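
One hedged direction is to wrap the same catalog views in a recursive CTE and walk from referenced table to referencing table (simplified to table level; a guard would be needed if the graph can contain cycles):

WITH fk_tree AS
(
    SELECT fk.object_id, fk.parent_object_id, fk.referenced_object_id
    FROM sys.foreign_keys fk
    WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.MyTable')

    UNION ALL

    SELECT fk.object_id, fk.parent_object_id, fk.referenced_object_id
    FROM sys.foreign_keys fk
    INNER JOIN fk_tree t
        ON fk.referenced_object_id = t.parent_object_id
)
SELECT DISTINCT
    OBJECT_SCHEMA_NAME(parent_object_id) AS schema_name,
    OBJECT_NAME(parent_object_id)        AS table_name,
    OBJECT_NAME(object_id)               AS fk_name
FROM fk_tree;

The column-level filter from the query above could be reapplied in the anchor member.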





After applying SQL Server 2012 SP2, Full-Text Filter Daemon won't restart


Before applying SP2 to SQL Server 2012, I stopped the following four processes in Sql Server Configuration Manager under SQL Server Services:



SQL Server
SQL Full-Text Filter Daemon Launcher
SQL Server Agent
SQL Server Browser


SP2 install completed successfully without error.


I went back in to restart the above processes, but the Full-Text Filter Daemon won't restart.


Is this a known issue, and how do I fix it?





Sql Server - Running a Job on Multiple servers


I am using SQL Server 2008.


I have a SQL Server job that checks disk space and emails admins if the free space is less than 15%.


Instead of adding that job to every server, is it possible to run the one job against all servers?


(I have those servers added as Linked Servers)
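
Given the linked servers, one hedged direction is to keep the single job and probe each server remotely ([Server2] is a placeholder; EXEC ... AT requires RPC Out to be enabled on the linked server):

-- Free megabytes per fixed drive on a remote server, via the linked server
EXEC (N'EXEC master.sys.xp_fixeddrives;') AT [Server2];

The job could loop over the linked-server names and apply the same 15% check to each result.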





Unable to find how a table is being populated


Using SSMS 2014 with SQL Server 2008 R2. I have a table for which I am trying to find out how, and from where, its data is coming. (I didn't create the table and am unable to contact the person who did.) I've tried the following:




  1. RMB on the table -> View Dependencies. Result: this comes back empty.




  2. I've tried running the following code:


    select object_name(object_id) as TableName, last_user_update, *
    from sys.dm_db_index_usage_stats
    where database_id = DB_ID('PackardIntranet')
    and object_id = object_id('tbl_Johnstone_Packard_SalesReport')


    Result: this just shows NULLs and 0's, except for my own earlier queries against the table.




  3. I've tried using RedGate's SQL Search with the table name or a column name as a parameter. Result: the only result is the table itself.




Am I missing something, or can it be that this table was just created and populated once by the person who created it? Or could it be populated by a script that is run from outside the database? Is there any way I can find out more about this table?
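
One more avenue (a hedged sketch; the audit names are illustrative) is to leave a lightweight trigger on the table so the next writer identifies itself:

CREATE TABLE dbo.SalesReport_WriteAudit
(
    EventTime DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    LoginName SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
    HostName  NVARCHAR(128) NULL     DEFAULT HOST_NAME(),
    AppName   NVARCHAR(128) NULL     DEFAULT APP_NAME()
);
GO
CREATE TRIGGER trg_SalesReport_WriteAudit
ON dbo.tbl_Johnstone_Packard_SalesReport
AFTER INSERT, UPDATE, DELETE
AS
    INSERT dbo.SalesReport_WriteAudit DEFAULT VALUES;

APP_NAME() and HOST_NAME() usually reveal whether the writer is a job, an SSIS package, or an ad-hoc session.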





Sql Server Change Data - Insufficient Arguments


I am trying to get SQL Server Change Data Capture configured and am running into problems trying to query the tables.


My setup:



create table Person (
Id uniqueidentifier PRIMARY KEY DEFAULT NewSequentialId(),
FirstName varchar(255),
LastName varchar(255),
Age int,
AddressId uniqueidentifier
);

exec sys.sp_cdc_enable_table @source_schema='dbo',
@source_name='Person',
@role_name='AuditUserX',
@supports_net_changes = 1


And my query to get the data out of the tracking tables:



declare @Min binary(10), @Max binary(10)
set @Min = sys.fn_cdc_get_min_lsn('dbo.Person')
set @Max = sys.fn_cdc_get_max_lsn()

select @Min, @Max
select * from cdc.fn_cdc_get_net_changes_dbo_Person(@Min, @Max, N'all') -- fails

GO


The last line before the GO fails with the following error:



Msg 313, Level 16, State 3, Line 18
An insufficient number of arguments were supplied for the procedure or function
cdc.fn_cdc_get_net_changes_ ... .
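
For reference, sys.fn_cdc_get_min_lsn takes the capture instance name, which by default is schema_table with an underscore, not the schema-qualified table name; an unknown name makes it return NULL, and passing a NULL from_lsn produces exactly this misleading error. A hedged sketch of the check and the corrected call:

-- The capture instance created above should be 'dbo_Person', not 'dbo.Person'
select capture_instance from cdc.change_tables;

declare @Min binary(10), @Max binary(10)
set @Min = sys.fn_cdc_get_min_lsn('dbo_Person')
set @Max = sys.fn_cdc_get_max_lsn()

select * from cdc.fn_cdc_get_net_changes_dbo_Person(@Min, @Max, N'all')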




du of directories in a diskgroup in Oracle ASM


In Linux, you could run the below on a directory to find the size of each sub-directory:



du -sh *


When you're in asmcmd, in the +FRA disk group for example, how can you retrieve the size of each directory in the disk group in one list, instead of running du +FRA/directory on each and every single directory?





Postgres - Inject 0 values when rows do not exist


I have 3 tables that I need for reporting:



*dates*
date_sk | full_date | day_number_of_month
1 | 2013-01-01 | 1
2 | 2013-02-01 | 1
3 | 2013-03-01 | 1
4 | 2013-02-02 | 2
5 | 2013-02-03 | 3

*person*
person_sk | person_id | person_name
1 | 10 | John
2 | 11 | Bob
3 | 12 | Jill



*person_portfolio*
person_portfolio_sk | date_sk | person_sk | res_value | report_month
1 | 1 | 1 | 15 | 2013-01-01
2 | 1 | 2 | 10 | 2013-01-01
3 | 1 | 3 | 1 | 2013-01-01
4 | 2 | 1 | 30 | 2013-02-01

(Imagine the dates table filled with every date for the past 10 and next 10 years, i.e. every day of every month. My reporting is at a monthly level, which is why I need day_number_of_month = 1 for the first of every month.)


For comparison reporting over a date range, I have been struggling to find out how to replace missing entries during that timeframe with 0 values for the person. Here is the query I have tried:



SELECT
p.person_id,
COALESCE(pp.res_value,0)::NUMERIC(16,2) AS res_value,
pp.report_month
FROM person p
LEFT JOIN person_portfolio pp
ON p.person_sk = pp.person_sk
LEFT JOIN date d
ON d.date_sk = pp.date_sk
WHERE person_id IN ('10','11','12')
AND pp.report_month >= '2013-01-01' --From Date
AND pp.report_month <= '2013-05-01' -- To Date
AND d.day_number_of_month = 1
ORDER BY p.person_id DESC;


The output I want to return would end up being 15 rows total. 3 people x 5 months of data = 15 total rows. It should look like this:



person_id | res_value | report_month
10 | 15 | 2013-01-01
10 | 30 | 2013-02-01
10 | 0 | 2013-03-01
10 | 0 | 2013-04-01
10 | 0 | 2013-05-01
11 | 10 | 2013-01-01
11 | 0 | 2013-02-01
11 | 0 | 2013-03-01
11 | 0 | 2013-04-01
11 | 0 | 2013-05-01
12 | 1 | 2013-01-01
12 | 0 | 2013-02-01
12 | 0 | 2013-03-01
12 | 0 | 2013-04-01
12 | 0 | 2013-05-01

but I am only getting these results:



person_id | res_value | report_month
10 | 15 | 2013-01-01
10 | 30 | 2013-02-01
11 | 10 | 2013-01-01
12 | 1 | 2013-01-01

So basically: is there a feasible way to inject the 0-value rows into the results when there is no entry for that report_month for a specific person? I would appreciate any kind of help, as I have been working on this for 2 weeks now trying to complete this report. Thanks!
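
The usual shape for this (a hedged sketch; names follow the post) is to build the full person-month grid with a cross join first, and only then left join the fact rows, so months without data survive as 0:

SELECT p.person_id,
       COALESCE(pp.res_value, 0)::NUMERIC(16,2) AS res_value,
       d.full_date AS report_month
FROM person p
CROSS JOIN dates d
LEFT JOIN person_portfolio pp
       ON pp.person_sk = p.person_sk
      AND pp.date_sk = d.date_sk
WHERE d.day_number_of_month = 1
  AND d.full_date >= DATE '2013-01-01' -- From Date
  AND d.full_date <= DATE '2013-05-01' -- To Date
  AND p.person_id IN ('10','11','12')
ORDER BY p.person_id, d.full_date;

The key difference from the query above is that the date filters sit on the dates dimension (the preserved side), not on person_portfolio columns, which would silently turn the left join back into an inner join.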





How to get best mark for each Student


I can't find a solution, although it should be an easy one.


I have some students and all I need is to get the highest mark of each one in Math.



TableName: Students.
Columns:
Name Mark
John 90
John 84
John 92
John 60
Yan 75
Yan 88
Yan 60


I want to get this results:



Name Mark
John 92
Yan 88


I have tried TOP 1, but it gives me only one student. I somehow need a TOP 1 per student, but I don't know how to do that.


Please help me to figure it out.


Thanks in advance.
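
For reference, for this exact shape (one best mark per name), the per-student TOP 1 collapses to a grouped MAX; a minimal sketch:

SELECT Name, MAX(Mark) AS Mark
FROM Students
GROUP BY Name;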





Find out beforehand how many records a query has


I develop reports in ExtJS based on queries against SQL Server 2012.


I use Ext.Grid with paging, so I can use OFFSET to limit the number of records a query returns. But for this ExtJS feature to work, I still must provide the total number of records the given query has.


Some reports use filters, and the data changes during the day; because of that, I can only know the total count by running the query. This results in running it twice: once with count(*) to get the total count, and again to get the actual data.


Is there a way to find the total count of a query, even when it has offset command, without having to run it twice as I'm doing now, and without looping through all records?
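
One hedged option is a window aggregate, which returns the pre-paging total on every row of the page (SQL Server 2012 syntax; table and column names are placeholders):

SELECT t.*,
       COUNT(*) OVER () AS total_count -- evaluated before OFFSET/FETCH trims the rows
FROM dbo.MyReport t
ORDER BY t.id
OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY;

It still costs roughly as much as counting, since the full result must be materialized to compute the window, but it avoids issuing and maintaining two separate statements.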





Return List of Database Users and Server Logins


How can I return a complete listing of all server logins and all database users across my enterprise? It needs to include the remote server name, database name, AD accounts, and SQL users.
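
For a single instance, a hedged sketch of the two halves (sp_MSforeachdb is undocumented but widely used; running this per server, e.g. via a registered-servers query or linked servers, would add the server dimension):

-- Server logins
SELECT @@SERVERNAME AS server_name, name, type_desc
FROM sys.server_principals
WHERE type IN ('S','U','G'); -- SQL logins, Windows users, Windows groups

-- Users in every database
EXEC sys.sp_MSforeachdb
    N'SELECT @@SERVERNAME AS server_name, ''?'' AS database_name, name, type_desc
      FROM [?].sys.database_principals
      WHERE type IN (''S'',''U'',''G'');';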





Count of 3 consecutive unsynced dates


I have a table set out like so:



user_id date minutes_asleep
----------------------------
1 2015-01-01 480
1 2015-01-02 0
1 2015-01-03 0
1 2015-01-04 0
1 2015-01-05 321
1 2015-01-06 0
1 2015-01-07 0
1 2015-01-08 0
2 2015-01-01 567
2 2015-01-02 0
2 2015-01-03 285
2 2015-01-04 0
2 2015-01-05 577
2 2015-01-06 0
2 2015-01-07 0
2 2015-01-08 0


I need to find the count of runs of 3 consecutive dates where minutes_asleep is 0. So in the example above, user_id 1 would have a count of 2 and user_id 2 would have a count of 1; the total count would be 3. So the returned result would be:



total_count
-----------
3
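
A hedged sketch of the usual gaps-and-islands approach (PostgreSQL-flavored; the table name sleep is a placeholder, and it assumes exactly one row per user per day). Each run of zero days gets a constant grouping value, and runs of length 3 or more are counted:

WITH zero_days AS (
    SELECT user_id, date,
           date - CAST(ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY date) AS int) AS grp
    FROM sleep
    WHERE minutes_asleep = 0
),
islands AS (
    SELECT user_id, grp, COUNT(*) AS run_len
    FROM zero_days
    GROUP BY user_id, grp
)
SELECT COUNT(*) AS total_count
FROM islands
WHERE run_len >= 3;

On the sample data this returns 3. If a run of, say, six zero days should count as two windows of three, the final SELECT would need SUM(run_len / 3) instead.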




Soccer league ER diagram


I am planning on creating a database to track teams and players in a soccer league. The stats I wish to track are as follows...


goals scored/conceded (GK), assists, minutes played, cards received, and leading scorers.


I also want overall standings, fixtures, and results. I have created an ER diagram and I think I have all the required tables, but I have some m:n relationships, which I know shouldn't exist, so I am hoping my relationships are correct. [ER diagram image]


I may or may not include the second tier of this league; otherwise I don't think I actually need the league table.


Am I anywhere near the right track? Any help would be appreciated.
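
On the m:n relationships: the standard fix is a junction table per pair. A hedged sketch for players appearing in fixtures (generic SQL; all names are illustrative):

CREATE TABLE appearance (
    player_id      INT NOT NULL REFERENCES player (player_id),
    fixture_id     INT NOT NULL REFERENCES fixture (fixture_id),
    minutes_played INT,
    goals          INT DEFAULT 0,
    PRIMARY KEY (player_id, fixture_id)
);

Each m:n on the diagram becomes one such table, and per-match stats (minutes, goals, cards) naturally live on it.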





Using MongoDB with circular object references


Please forgive me from the beginning. I am brand new to document oriented databases and I am sure that I will need guidance to reform the question in order to get answers.


I am looking to use MongoDB for an application built in PHP. As part of the object specification, some objects reference their parents. From what I have read, this is an issue when performing updates on a document in Mongo, as it stores a copy of the object in the page rather than a reference. I understand that this provides a significant performance increase on reads, but that it can cause a significant penalty on writes/updates, as each copy of the object needs to be updated. Also, if it stores a complete copy of the parent object, does that not cause infinite recursion?


Maybe I am missing something about how Mongo manages references, or maybe I am doing something wrong with the schema design, but it seems like I am missing something.


Thank you in advance for your guidance and advice!





Does my MSDN License allow me to use SQL Server Dev on a dev machine? [on hold]


Complete new guy question.


Can I install SQL Server Dev version on a machine I use for development?


The Dev Machine is a virtual box that I am building and testing an intranet site for my office.


I am currently using SQL Server Express, but want to start using things like DBMail, and think I want to put the full dev version on the Dev Machine.


I've viewed the following thread: When is it appropriate to use SQL Server Developer Edition?


and am confused.


I have an MSDN license, but what if I or my boss use the intranet site (still in beta) that runs queries against the database on my SQL Server instance: does that require an individual license for each of us?


Andy





PostgreSQL Primary key disappears from test table


I have a somewhat convoluted scenario where a test table I had created with a primary key won't return the primary key. pgAdmin III reports there are no constraints. I have the entire PostgreSQL query log, and below is the query I used to create the test table. I then dropped the primary key on a different test table and used the query that pgAdmin III generated (it's not a query I have manually run, yet) to help me search the log for anything dropping the primary key on the table in question, and found nothing searching for:



ALTER TABLE public.delete_key_bigserial DROP CONSTRAINT


The string 'DROP CONSTRAINT' only appears once in the query log, dating back to 2014-12-02, which is weeks before I even created the test tables. I now understand that a primary key may or may not be backed by a bigserial or serial column, and I have even created a table with id as a plain integer and then set id to be the primary key (another can of worms for a whole 'nother day).


In an earlier question I inquired about how to fetch the data_type, including whether it is bigserial or serial, to which Erwin Brandstetter gave an excellent answer. He provided two queries in particular: one to fetch the data_types for all the columns, and one to fetch the data_type of the primary key. Unfortunately, one of the test tables I have been testing with isn't returning any results.



SELECT a.attrelid::regclass::text, a.attname,
CASE a.atttypid
WHEN 'int'::regtype THEN 'serial'
WHEN 'int8'::regtype THEN 'bigserial'
WHEN 'int2'::regtype THEN 'smallserial'
END AS serial_type
FROM pg_attribute a
JOIN pg_constraint c ON c.conrelid = a.attrelid AND c.conkey[1] = a.attnum
JOIN pg_attrdef ad ON ad.adrelid = a.attrelid
AND ad.adnum = a.attnum
WHERE a.attrelid = 'delete_key_bigserial'::regclass
AND a.attnum > 0
AND NOT a.attisdropped
AND a.atttypid = ANY('{int,int8,int2}'::regtype[]) -- integer type
AND c.contype = 'p' -- PK
AND array_length(c.conkey, 1) = 1 -- single column
AND ad.adsrc = 'nextval('''
|| (pg_get_serial_sequence (a.attrelid::regclass::text, a.attname))::regclass
|| '''::regclass)';


The query works perfect on all the other tables.


I've only been working with PostgreSQL since November 2014 and MySQL since circa 2011, so the best thing I can do, AFAIK, is to fetch as much relevant data as I can. Here is the query used to create the delete_key_bigserial table, from the query log:



CREATE TABLE public.delete_key_bigserial (id bigserial PRIMARY KEY NOT NULL)
WITH (OIDS = FALSE);
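
Incidentally, a direct way to ask the catalogs whether any constraint survives on the table is pg_constraint itself (a sketch):

SELECT conname, contype, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'delete_key_bigserial'::regclass;

If this returns no row with contype = 'p', the primary key really is gone, whatever pgAdmin displays.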


I simplified Erwin's query and ran it on the table, comparing the results in my query tool to the different test tables on which the query works perfectly (on all four data_types):



SELECT * FROM pg_attribute a
WHERE a.attrelid = 'delete_key_bigserial'::regclass
AND a.attnum > 0
AND NOT a.attisdropped
AND attname='id'
ORDER BY a.attnum;

+----------+---------+----------+---------------+--------+--------+----------+-------------+
| attrelid | attname | atttypid | attstattarget | attlen | attnum | attndims | attcacheoff |
+----------+---------+----------+---------------+--------+--------+----------+-------------+
| 46390 | id | 20 | -1 | 8 | 20 | 0 | -1 |
+----------+---------+----------+---------------+--------+--------+----------+-------------+

+-----------+----------+------------+----------+------------+-----------+--------------+
| atttypmod | attbyval | attstorage | attalign | attnotnull | atthasdef | attisdropped |
+-----------+----------+------------+----------+------------+-----------+--------------+
| -1 | f | p | d | t | t | f |
+-----------+----------+------------+----------+------------+-----------+--------------+

+------------+-------------+--------------+--------+------------+---------------+
| attislocal | attinhcount | attcollation | attacl | attoptions | attfdwoptions |
+------------+-------------+--------------+--------+------------+---------------+
| t | 0 | | | | |
+------------+-------------+--------------+--------+------------+---------------+


Erwin derives the type via the atttypid column when other conditions are met; however, the resulting column/row is identical to other tables that work. There is another catalog table I've used in my attempts to determine the data_type of the primary key, so I decided to compare the results from that table as well, via the following query:



SELECT * FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
AND table_name='delete_key_bigserial'
AND is_nullable='NO';


The only difference in any returned column/row (besides the table name in the table_name and column_default columns) was the dtd_identifier column. The table delete_key_bigserial returns the dtd_identifier column with the value 20; for a working table the query returns 1. The (bottom of the) PostgreSQL element_types documentation describes the column as:



An identifier of the data type descriptor of the element. This is currently not useful.



I am guessing this is a deprecated, older mechanism kept for legacy purposes, though it could simply refer to the descriptor itself? I'm not sure, but this is where I am, and I'm not even certain I'm on the right path.


I'd rather deal with the issue and learn from the scenario than disregard it simply because it's a test table, as one day I'm certain I'll have to deal with this issue when it's not a test table. I'll be happy to update my question with relevant information that may help track down the issue.





Best practices for generating a tnsnames.ora entry


I need to integrate some scripts that export and import data from Oracle databases. They are shell scripts that rely on the TNS name being set up (e.g., scott/tiger@tns).


The application owners are likely not going to change the scripts to support EZConnect syntax, so my best option right now is to automate the TNS name setup. The file, for reference, sits here:



$ORACLE_HOME/network/admin/tnsnames.ora


Is there already a best practice / existing utility for doing this? Or am I just going to have to write a shell script to pump out the task?
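
For reference, the shape of the entry such a script would need to append (a sketch; every value here is a placeholder):

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb.example.com)
    )
  )

Since the format is stable, generating entries from a template per database is usually straightforward to script.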





MDF file full - cannot find the source of its giant size


All,


I am completely at a loss here and I'm hoping you guys can help me out on this one. I am experiencing the following situation on an SQL Server 2008 R2 database.


MDF file: allocated space: 87.392,69 MB; available space: 256,13 MB


The LDF file is only a few megabytes, most of which are available. So as you can see, my database is pretty full at the moment. I tried looking for the source of this, but here is where things get "weird".


When I check the top tables by usage, the top table is only 3 GB big, followed by a few of 1 GB. I've summed all those totals and came to around 20 GB of data. There's a giant gap here... and I'm not sure where it's coming from. I've been googling for the last few hours hoping to get some pointers; I've checked whether any BLOB data was causing this, but that seems not to be the case as far as I can see.
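
One hedged way to account for every byte, including internal tables and indexes that per-table reports sometimes miss, is to sum reserved pages per object straight from the DMVs:

-- Reserved space per object, largest first (8 KB pages converted to MB)
SELECT s.name AS schema_name,
       o.name AS object_name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats ps
JOIN sys.objects o ON o.object_id = ps.object_id
JOIN sys.schemas s ON s.schema_id = o.schema_id
GROUP BY s.name, o.name
ORDER BY reserved_mb DESC;

If even this doesn't add up to the allocated size, the gap may be free space inside the file, which sp_spaceused for the database as a whole would confirm.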


Am I missing something here? Did anyone experience something like this before? I can't find anything related to this so I'm completely at your mercy.


If you need any specific details I'll add them when requested.


Thanks in advance





Issues with Optimizing TSQL Script


I have a temp table that has a SecurityCode column. This SecurityCode column gets updated based on a value from a CrossReference table.


The value selected from the CrossReference table depends on how much data that row contains.


So the first update matches on CustomIdentifier only. The second update applies when the CrossReference row's CounterpartyID column has a value, and updates based on that row. The third update applies when the CrossReference row's CounterpartyID and AccountCode columns both have values, and updates based on that row.



INSERT into @ManagerData
SELECT
ManagerID,
SecurityManager,
SecurityAccount,
null as SecurityCode,
Broker,
AssetType,
InvestmentType,
Description,
Currency,
sum(Quantity) as Quantity
FROM
dbo.ManagerData
WHERE (ManagerID = @ManagerID)
AND (Account = @Account)
GROUP BY ManagerID, Account, Broker,AssetType,InvestmentType,Description, Currency


-- Update with Code Cross Reference By Instrument Identifier Only
Update @ManagerData
Set SecurityCode = CrossReference.GenevaCode
From @ManagerData MD
Inner Join CrossReference
On MD.ManagerID = CrossReference.CustomIdentifier
WHERE CrossReference.CounterPartyID = ''
AND CrossReference.AccountCode = ''

-- Update with Code Cross Reference By CounterParty & Instrument Identifier
Update @ManagerData
Set SecurityCode = CrossReference.GenevaCode
From @ManagerData MD
Inner Join CrossReference
On MD.ManagerID = CrossReference.CustomIdentifier
AND MD.SecurityManager = CrossReference.CounterPartyID
WHERE CrossReference.AccountCode = ''

-- Update with Code Cross Reference By CounterParty, CounterParty Account & Instrument Identifier
Update @ManagerData
Set SecurityCode = CrossReference.GenevaCode
From @ManagerData MD
Inner Join CrossReference
On MD.ManagerID = CrossReference.CustomIdentifier
AND MD.SecurityManager = CrossReference.CounterPartyID
AND MD.SecurityAccount = CrossReference.AccountCode


How do I combine this script into a single query, if possible? I have tried something similar to the following...



SELECT
ManagerID,
SecurityManager,
SecurityAccount,
null as SecurityCode,
Broker,
AssetType,
InvestmentType,
Description,
Currency,
sum(Quantity) as Quantity
FROM
dbo.ManagerData MD
LEFT Join CrossReference cr
On MD.ManagerID = cr.CustomIdentifier
AND (cr.CounterPartyID = MD.SecurityManager OR cr.CounterPartyID = '')
AND (cr.AccountCode = MD.SecurityAccount OR cr.AccountCode = '')
WHERE (ManagerID = @ManagerID)
GROUP BY ManagerID, Account, Broker,AssetType,InvestmentType,Description, Currency
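
One hedged direction for the update half: rank the candidate CrossReference rows by how specific they are and apply only the best match, which collapses the three updates into one (it assumes @ManagerData has at most one row per (ManagerID, SecurityManager, SecurityAccount) combination, as the GROUP BY suggests):

;WITH best AS (
    SELECT MD.SecurityCode, cr.GenevaCode,
           ROW_NUMBER() OVER (
               PARTITION BY MD.ManagerID, MD.SecurityManager, MD.SecurityAccount
               ORDER BY CASE WHEN cr.CounterPartyID <> '' THEN 1 ELSE 0 END
                      + CASE WHEN cr.AccountCode <> '' THEN 1 ELSE 0 END DESC
           ) AS rn
    FROM @ManagerData MD
    INNER JOIN CrossReference cr
            ON MD.ManagerID = cr.CustomIdentifier
           AND (cr.CounterPartyID = MD.SecurityManager OR cr.CounterPartyID = '')
           AND (cr.AccountCode = MD.SecurityAccount OR cr.AccountCode = '')
)
UPDATE best SET SecurityCode = GenevaCode WHERE rn = 1;

The CASE-weighted ordering reproduces the original precedence: a full match beats a counterparty-only match, which beats an identifier-only match.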




Duplicate database leaves orphan archivelog files


I am cloning the production database to another one, let's say CLONEDB, on a regular basis. I am using an RMAN duplicate database script like:



run {
duplicate target database to CLONEDB
FROM ACTIVE DATABASE
PASSWORD FILE
SPFILE
SET ....
DB_FILE_NAME_CONVERT=....
LOGFILE
GROUP 1 (
....
) SIZE 100M,
....
;
}


Before running the script I do an RMAN archivelog backup of CLONEDB (which deletes the backed-up archivelogs) and then DROP DATABASE CLONEDB.


While running the duplicate, RMAN copies datafiles and archivelogs from the source database. For example, I can see that a file like this has been copied:



input archived log thread=1 sequence=666 ...
output file name=+DYSK/archivelog/2015_01_16/thread_1_seq_666
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01


Then RMAN recovers the database (via set until scn 666666; recover clone database delete archivelog;), so the archivelog above is applied to the database:



starting media recovery
archived log for thread 1 with sequence 666 is already on disk as file +DYSK/archivelog/2015_01_16/thread_1_seq_666
archived log file name=+DYSK/archivelog/2015_01_16/thread_1_seq_666 thread=1 sequence=666
media recovery complete, elapsed time: 00:00:02
Finished recover at 2015.01.16 16:21:57
Oracle instance started


Everything is OK, the database starts up, users are happy. But the archivelog file (in this example thread_1_seq_666) stays on the disk. And because RMAN changes the database incarnation after the duplicate, this file is never deleted during the backup process, as it belongs to the previous incarnation.


So from time to time I must do the housekeeping and delete old, orphaned archivelogs directly from the disk. Of course I would like to do it automatically, within the duplication script. I could add an rm command via asmcmd for the old files, but maybe there is a way to convince RMAN to do it for me during the duplicate?





Unable to start SQL Safe service: Error 1053: Could Not Start or Stop the Service in a Timely Fashion


We have been facing quite a tedious issue for the past couple of months and are out of ideas now.


We are using IDERA SQL Safe to back up the databases on many of our servers.


IDERA was installed earlier by an engineer who is no longer working with us and, to add insult to injury, he configured the SQL Safe backup service to run under his domain account.


Since his domain ID is being disabled, we created an exact replica service account to run the SQL Safe backup service, to correct what was done as a mistake earlier.


When we try to run the service with the newly created account we get the error below:


The error encountered is Error 1053: Could Not Start or Stop the Service in a Timely Fashion.


But when we use his domain ID, it works.


We opened a case with IDERA, but no luck.


Can anyone help us with any ideas or tricks that may work here?





SQL Server 2012 Installation Has No Option To Uninstall


I've recently inherited a laptop for testing from a client, unfortunately it wasn't a fresh image so it has some software already installed that's causing me some grief.


The short version: their IT department has had some issues (I think the only person still working there is the guy who makes the tea), and they've asked me to set up an easy-to-deploy SQL Server 2012 installation and instance. Easy enough, as this is scripted and ready to go. The issue is that SQL Server 2012 is already installed on the laptop and there's no option to uninstall it through Add/Remove, so I can't do a test install for them to look at.


I'm thinking that the installation is corrupt, and I'm going to have to either A) remove it all manually and hope I get all the registry keys and associated files, or B) send it back to the client and wait for them to send me another laptop (this one's not preferable; like I said, they're having a number of issues with their IT department and I'm pretty sure I'd just get the same laptop back as it is now).


Am I right in thinking this is a botched/corrupted installation, or have I missed something? It's Windows 7 Professional, logging in with a local admin account. I've been able to uninstall some other applications.


I checked some other posts on here and it sounds similar to: How to uninstall SQL Server 2012 not showing in Control Panel's "Programs and Features".


Thanks for your opinion. Neil





Best practices for selecting partition boundary ranges to maintain empty partitions at both ends?


I have partitioned the table based on a date datatype partitioning column.


The data for the column starts at "2014-01-01" and ends at "2015-01-01". I know that a partitioned table should keep empty partitions at both the left and right ends, so I selected the following boundary ranges when creating the partition function. The partition function is shown below:



create partition function PfDateRange(date)
as range right for values('2014-04-01','2014-07-01','2014-10-01','2015-01-01')


I would like to hear some tips or good practices for selecting boundary ranges.


Also, in the future data may be inserted that is greater than the last boundary range. So how do I maintain an empty partition at the right end?
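A sketch of the usual maintenance pattern: designate the next filegroup, then split a new boundary off the rightmost partition before any data for it exists (PsDateRange is a hypothetical partition scheme name here):


ALTER PARTITION SCHEME PsDateRange NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION PfDateRange() SPLIT RANGE ('2015-04-01');


Because the split happens while the rightmost partition is still empty, no data movement occurs.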


Thanks in advance.





How to stop Firebird SuperClassic (run as application) in Win 7?


I run Firebird 2.5.3 SuperClassic as an application on Windows 7 64-bit with this command:



fb_inet_server -a -m


Now, how can I stop/shut it down from the command prompt? I know that I could right-click its icon and choose Shutdown, but this is not what I'm looking for.


Thanks in advance.





How to change the ibdata1 path in MySQL?


My ibdata1 file keeps increasing in size, and as a temporary solution we want to move the file to a separate location with more hard disk space.


But when I change the my.cnf parameters datadir and innodb_data_home_dir and restart the service, it does not start, throwing errors like:



ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
ERROR! MySQL is not running, but lock file (/var/lock/subsys/mysql) exists
ERROR! The server quit without updating PID file


Please help. I need it immediately as it is the production database and the space is almost full. (A sketch of the relevant settings follows the configuration file below.)


Here is the configuration file for your reference:



[mysqld]
port=3316
datadir=/data/databases/mysql
socket=/var/lib/mysql/mysql.sock
user=root
slow_query_log = 0
long_query_time = 10
log-output = FILE
sql_mode = STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_UNSIGNED_SUBTRACTION
max_sp_recursion_depth = 40
group_concat_max_len = 30000
max_tmp_tables = 64
key_buffer_size = 64M
max_allowed_packet = 16M
table_open_cache = 256
sort_buffer_size = 8M
read_buffer_size = 8M
join_buffer_size = 8M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 128M
thread_cache_size = 16
query_cache_size = 16M
binlog_cache_size = 8M
thread_stack = 384K
thread_concurrency = 23
innodb_thread_concurrency = 46
innodb_data_home_dir = /data/databases/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /data/databases/mysql
innodb_buffer_pool_size = 30G
innodb_additional_mem_pool_size = 16M
innodb_log_file_size = 512M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 600
innodb_rollback_on_timeout = 1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysqldump]
quick
max_allowed_packet = 16M
[mysqlhotcopy]
interactive-timeout


MySQL version is 5.5.20
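For illustration only, a minimal sketch of the InnoDB settings after such a move, assuming a hypothetical new location /data2/mysql, that ibdata1 and the ib_logfile* files are moved there together with ownership preserved for the mysql user, and that any SELinux or AppArmor policy is updated to allow the new path (permission problems are a common cause of exactly these startup errors):


[mysqld]
# hypothetical new location for the shared tablespace and redo logs
innodb_data_home_dir = /data2/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /data2/mysql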





SQL Multiple JOINS Interpretation order [on hold]


I have a big problem using multiple joins in one SQL statement.


For example, I have 10 tables: tbl1, tbl2, tbl3, ..., tbl10.


I would like to select some columns from each table, tbl1 through tbl10.


The relations, for example, are like the following (relationship diagram not included):


When I use JOINs in between, I don't know how LEFT, RIGHT, etc. are prioritized in the SQL statement. I mean, is it organized with parentheses, or how?


Example schema for the SQL (image not included).


And my query is as follows:



SELECT T1.col1, T2.col2, T3.col3, T4.col1 AS Expr1
FROM T1 INNER JOIN T1_T2 ON T1.id = T1_T2.T1_FK
INNER JOIN T2 ON T1_T2.T2_FK = T2.id
LEFT JOIN T2_T3 ON T2.id = T2_T3.T2_FK
INNER JOIN T3 ON T2_T3.T3_FK = T3.id
LEFT JOIN T3_T4 ON T3.id = T3_T4.T3_FK
INNER JOIN T4 ON T3_T4.T4_FK = T4.id


So how does SQL Server interpret the JOINs? Can one represent the execution priorities with parentheses?


I mean: (((T1 INNER JOIN T1_T2) INNER JOIN T2) LEFT JOIN T2_T3) ... ?
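Roughly, yes: joins in a FROM clause associate left to right, so the query above can be written with explicit parentheses as in this sketch:


SELECT T1.col1, T2.col2, T3.col3, T4.col1 AS Expr1
FROM (((((T1
INNER JOIN T1_T2 ON T1.id = T1_T2.T1_FK)
INNER JOIN T2 ON T1_T2.T2_FK = T2.id)
LEFT JOIN T2_T3 ON T2.id = T2_T3.T2_FK)
INNER JOIN T3 ON T2_T3.T3_FK = T3.id)
LEFT JOIN T3_T4 ON T3.id = T3_T4.T3_FK)
INNER JOIN T4 ON T3_T4.T4_FK = T4.id


Note that this only pins down the logical result; the optimizer is still free to reorder the physical execution. Also, an INNER JOIN applied after a LEFT JOIN can discard the NULL-extended rows the LEFT JOIN produced, which is often the real source of surprises in chains like this.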


Thanks a lot for the feedback.




Edited for More Explanation:





How to upgrade MySQL from 5.6.12 to 5.6.21?


I have been an MSSQL DBA and don't have much knowledge about MySQL. Does MySQL also have patches, just like MSSQL? I need to upgrade MySQL from 5.6.12 to 5.6.21, but I do not have much information at hand. A step-by-step guide would be best, but any information will be helpful. Thanks a lot, everyone.





Report Server Subscription Configuration Issue


I have created a subscription on our development server with the steps below.


1) Opened Report Configuration Manager and configured the email settings.
2) Opened Report Manager and configured a report data source with stored credentials (my domain login account, which I can also use to connect to the database engine).
3) Went to the Subscription tab on that report and configured a subscription with a schedule.


Error on "Status" bar of Subscription tab : Failure sending mail: The report server has encountered a configuration error. Mail will not be resent.


I checked the logs:


1st time shot - Throwing Microsoft.ReportingServices.Diagnostics.Utilities.OperationNotSupportedNativeModeException: , Microsoft.ReportingServices.Diagnostics.Utilities.OperationNotSupportedNativeModeException: This operation is not supported on a report server that runs in native mode.;


2nd time Shot - Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: AuthzInitializeContextFromSid: Win32 error: 5; possible reason - service account doesn't have rights to check domain user SIDs., Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: The report server has encountered a configuration error. ;


Questions:


1) How do I check that the "reporting server Windows service has Send As permission on the SMTP server"? (Does it mean that the report server service account should have such permission?)


2) Is there any specific reason for the first and second errors?





Off-site Data Guard replication status check [on hold]


Kindly assist with a step-by-step approach for monitoring replication as it progresses from our primary to our secondary data center. A short script or command that can accomplish this task would be appreciated. The OS is RHEL, and we are using Data Guard due to the long distance between sites.
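A minimal sketch of the kind of check that can be run (or scripted via sqlplus in a cron job) on the standby, assuming v$dataguard_stats is populated in this configuration:


-- transport and apply lag at a glance
SELECT name, value, time_computed
FROM v$dataguard_stats
WHERE name IN ('transport lag', 'apply lag');

-- highest archived sequence already applied, per thread
SELECT thread#, MAX(sequence#) AS last_applied
FROM v$archived_log
WHERE applied = 'YES'
GROUP BY thread#;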





How to optimize search restricted by boolean column?


I want to optimize a query1 of the form



SELECT yid, xid FROM x WHERE is_principal;


Here, is_principal is a non-null boolean column of x.


If I run EXPLAIN QUERY PLAN on this query, the output I get is



0|0|0|SCAN TABLE x


I'd like to turn this SCAN into a SEARCH, so I created the following index:



CREATE INDEX x_is_principal_idx ON x (is_principal);


...but it made no difference: the output of EXPLAIN QUERY PLAN remains unchanged.


Is there some other way to optimize this query?




1 FWIW, this query is in fact a sub-query of a much larger query that I'm trying to optimize.
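One option that may help, assuming SQLite 3.8.0 or later: a partial index containing only the rows where is_principal is true, covering both selected columns:


CREATE INDEX x_principal_idx ON x (yid, xid) WHERE is_principal;


Since its predicate matches the query's WHERE clause, SQLite can answer the query from this (much smaller) covering index instead of reading the whole table.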





Cannot create MySQL database with name '5e370227_db'


Why am I getting an error when I try to create a database named '5e370227_db'?


I am getting the following error:



mysql> create database 5e370227_db;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '5e370227_db' at line 1


When I try to create a database named '5a370227_db' (e -> a), it is created successfully:



mysql> create database 5a370227_db;
Query OK, 1 row affected (0.00 sec)


Details:



  • MYSQL: mysql Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline 5.1

  • Linux Centos 2.6.32-358.el6.x86_64

  • logged as root

  • database admin
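A plausible explanation: the lexer reads 5e370227 as a floating-point literal in scientific notation, so the statement no longer starts with an identifier, while 5a370227_db carries no such ambiguity. Quoting the name with backticks should sidestep the parse problem:


mysql> create database `5e370227_db`;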





Will column size change when using a single Unicode character in place of multiple ASCII characters?


I have a text column with utf8_unicode_ci collation in MySQL.


The content is Unicode, and I have used tags like {br}, {v}, {t}, {h}, etc. throughout the database. I am now thinking of changing these to single characters from the Unicode table, like ⓑ, ⓥ, ⓣ, Ⓗ, and so on.


Will this have any effect on the size of the table (increase or decrease)? Should I be worried about any other side effects? Would it make a difference if the column type were varchar?
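For what it's worth, a quick way to compare the two on a utf8 connection, where the circled letters occupy three bytes each while each tag character is one byte:


SELECT CHAR_LENGTH('{br}') AS tag_chars, LENGTH('{br}') AS tag_bytes,
       CHAR_LENGTH('ⓑ') AS uni_chars, LENGTH('ⓑ') AS uni_bytes;
-- expected: 4, 4, 1, 3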





Use join to aggregate data from various normalized tables


I have a database as follows:



tbl_clients (Primary Key: client_id, Foreign Keys:place_id,district_id,zone_id,country_id)
tbl_places (Primary Key: place_id)
tbl_districts (Primary key: district_id)
tbl_zones (Primary Key: zone_id)
tbl_country (Primary key: country_id)
tbl_branches (primary key: branch_id, Foreign Key: client_id)


My client asked me to provide all the data in a csv file. The requirement is:


If a client has branches, the branch details (from tbl_branches) should also show. If a client does not have branches, only its own details should show.


I have achieved the second condition using the following:


select distinct
    a.name as Client_Name,
    a.pobox_no as Postal_Address,
    a.street as Street_Address,
    d.place_name as Place,
    e.district_name as District,
    f.zone_name as Zone,
    g.country_name as Country,
    a.fax as Fax,
    a.telephone as Telephone,
    a.telephone2 as Telephone2,
    a.email as Email,
    a.website as website,
    a.published as Published
from db_name.tbl_client a
join db_name.tbl_subcategory b on a.sub_cat_id = b.cat_id
join db_name.tbl_category c on a.category_id = c.id
join db_name.tbl_place d on a.place_id = d.place_id
join db_name.tbl_districts e on a.district_id = e.district_id
join db_name.tbl_zones f on a.zone_id = f.zone_id
join db_name.tbl_country g on a.country_id = g.country_id
where a.published = 1


How can I achieve the first condition?
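A sketch of one way to cover both cases: LEFT JOIN the branches, so clients without branches survive the join with NULL branch columns. The remaining joins stay as in the query above; the branch column names here are hypothetical, since only branch_id and client_id are known from the schema:


select distinct
    a.name as Client_Name,
    d.place_name as Place,
    br.branch_id as Branch_Id,
    br.branch_name as Branch_Name  -- hypothetical column
from db_name.tbl_client a
left join db_name.tbl_branches br on br.client_id = a.client_id
join db_name.tbl_place d on a.place_id = d.place_id
where a.published = 1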





find tables present in a query


I have a table with 1 column (varchar(max)) that holds a list of distinct queries. Does anyone have a query that could go through that list and select all table names (not just the first one) from the queries?



create table temp1 (query varchar(max))

insert into temp1
values (' SELECT * FROM table1')
, (' SELECT col1, col23 FROM table2 a join table3 b on a.col1 = b.col1')
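If the referenced tables all live in the current database, a rough sketch is to match each query string against sys.tables. This is crude pattern matching, so it will also hit substrings and commented-out names, but it returns every table per query rather than just the first:


SELECT t.query, tab.name AS table_name
FROM temp1 AS t
JOIN sys.tables AS tab
    ON t.query LIKE '%' + tab.name + '%';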




Why is my data not being sharded?


I am using Mongo 2.6.7. I deployed a sharded cluster following the guide: http://ift.tt/1iPAR3b


My sharded cluster includes:

  • two replica sets (one primary + one secondary per replica set)

  • three config servers

  • two mongos


I made sure sh.getBalancerState() is true. I prepared about 1 GB of data. When I recheck with sh.status():



--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 4,
"minCompatibleVersion" : 4,
"currentVersion" : 5,
"clusterId" : ObjectId("54c2644bf2759e71913a3b59")
}
shards:
{ "_id" : "test_repl_set", "host" : "test_repl_set/mongo1.example.com:27017,mongo2.example.com:27017" }
{ "_id" : "test_repl_set1", "host" : "test_repl_set1/mongo3.example.com:27017,mongo4.example.com:27017" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "linux", "partitioned" : true, "primary" : "test_repl_set" }
linux.components
shard key: { "_id" : 1 }
chunks:
test_repl_set 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : test_repl_set Timestamp(1, 0)
linux.distro
shard key: { "name" : 1 }
chunks:
test_repl_set 1
{ "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : test_repl_set Timestamp(1, 0)
{ "_id" : "test", "partitioned" : false, "primary" : "test_repl_set1" }
{ "_id" : "testdata", "partitioned" : false, "primary" : "test_repl_set1" }
{ "_id" : "testshard", "partitioned" : true, "primary" : "test_repl_set1" }
testshard.testdata
shard key: { "x" : 1 }
chunks:
test_repl_set1 1
{ "x" : { "$minKey" : 1 } } -->> { "x" : { "$maxKey" : 1 } } on : test_repl_set1 Timestamp(1, 0)


I only get data of testshard from test_repl_set, and there is no data of testshard from test_repl_set1. I don't know why testshard is not being migrated, and why does mongos tell me there is only one chunk on test_repl_set1?





Single transaction across multiple SQL databases


I have 3 SQL databases. I have to insert the same record into all 3 databases at the same time. If the transaction does not succeed in one database, I should roll back the other 2 as well. Can you help me?
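If the three databases live on the same SQL Server instance, one transaction with TRY/CATCH covers this. A sketch with hypothetical database and table names; across separate servers a distributed transaction via MSDTC would be needed instead, and THROW requires SQL Server 2012 or later:


DECLARE @Value int = 42;  -- sample payload

BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO DB1.dbo.MyTable (Col1) VALUES (@Value);
    INSERT INTO DB2.dbo.MyTable (Col1) VALUES (@Value);
    INSERT INTO DB3.dbo.MyTable (Col1) VALUES (@Value);
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- any failure undoes all three inserts together
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;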





jeudi 29 janvier 2015

Facing SQL Error 3403


Recently I was working in SQL Server and I faced an error, SQL Error 3403, which stated: "Error 3403, Severity 22: During recovery initialization, page %ld was encountered. This page belongs to object %ld, not the log."


Can anyone help me get rid of it with a reliable solution?





MySQL query I am unable to solve


I have two tables, pet details and owner details, with many columns such as Name, pets_id, Owners_id and Name, Owners_id, Mobile No. respectively. The query should display the owner name, mobile no. and pet name.
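A sketch under assumed table and column names (pet_details and owner_details, joined on Owners_id):


SELECT o.Name AS owner_name,
       o.Mobile_No AS mobile_no,
       p.Name AS pet_name
FROM owner_details o
JOIN pet_details p ON p.Owners_id = o.Owners_id;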





what would be the primary disadvantages of splitting my tables into table spaces and/or separate schemata


I would like to organize the MySQL tables of a complex application into some groups, to avoid looking at one big bin of tables. One big bin is harder to administer when, for example, I would like to reset certain groups of related tables, or if I later wish to move a group of related tables to a different MySQL server altogether; it is also more error-prone when developing new DDL code.


I can apply a sense of grouping with my own table-name prefixing "convention", but I have thought about using tablespaces or even separate schemata. What would be the primary disadvantages of using separate tablespaces or schemata to apply a sense of grouping to my tables?





MySQL join not working


I have a database with two tables: logs and src_ipinfo.



mysql> describe logs;
+-----------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| sessionid | int(8) | NO | | NULL | |
| date | date | NO | | NULL | |
| time | time(6) | NO | | NULL | |
| src | varchar(15) | NO | | NULL | |
| dst | varchar(15) | NO | | NULL | |
| dstport | int(6) | NO | | NULL | |
| proto | varchar(6) | NO | | NULL | |
| rcvdbyte | int(24) | NO | | NULL | |
| sentbyte | int(24) | NO | | NULL | |
+-----------+-------------+------+-----+---------+----------------+
10 rows in set (0.00 sec)

AND:



mysql> describe src_ipinfo;
+----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| src | varchar(15) | NO | | NULL | |
| hostname | varchar(50) | NO | | NULL | |
| city | varchar(50) | NO | | NULL | |
| region | varchar(50) | NO | | NULL | |
| country | varchar(2) | NO | | NULL | |
| org | varchar(150) | NO | | NULL | |
+----------+--------------+------+-----+---------+----------------+
7 rows in set (0.00 sec)

I am trying to run queries pulling from both tables, but when I do, MySQL just hangs and I get no error and no results. These are the queries that are causing problems:



mysql>SELECT logs.src, logs.dst, logs.dstport, src_ipinfo.country, COUNT(1) hits FROM logs, src_ipinfo WHERE logs.src = src_ipinfo.src GROUP BY logs.src, logs.dst, logs.dstport ORDER BY hits DESC;

mysql>SELECT * FROM logs a , src_ipinfo b WHERE a.src = b.src AND a.dstport= 60595 ORDER BY a.src, a.dst;

The ultimate goal is to search logs where dstport = 'xxxxx' and country != 'US', keeping count of how many times each source reached the same dst, dstport combination. Example output:



+----------+--------------+---------+-------+---------+-------+
| src | dst | dstport | proto | country | hits |
+----------+--------------+---------+-------+---------+-------+
| 2.3.45.3 | 10.10.1.23 | 60531 | TCP | CN | 3452 |
| 1.2.45.3 | 10.10.1.23 | 80801 | TCP | NL | 37 |
| 4.5.45.3 | 10.10.1.23 | 443 | TCP | IN | 2 |
+----------+--------------+---------+-------+---------+-------+

I already have individual queries for counting src, dst combo:



mysql>SELECT src, dst, dstport, proto, COUNT(src) hits FROM `logs` WHERE dstport = '60595' GROUP BY src,dst ORDER BY hits DESC;

Now I need to add the country information from the src_ipinfo table to that query, and further filter the result so that I only get non-US sources.
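Putting those pieces together, a sketch of the combined query with explicit JOIN syntax, the country taken from src_ipinfo, and non-US sources only. If the hang comes from joining two large tables on unindexed columns, indexes on both src columns (noted in the comments) are the likely prerequisite:


-- likely prerequisites, since neither src column is indexed above:
-- ALTER TABLE logs ADD INDEX idx_logs_src (src);
-- ALTER TABLE src_ipinfo ADD INDEX idx_ipinfo_src (src);

SELECT l.src, l.dst, l.dstport, l.proto, i.country, COUNT(*) AS hits
FROM logs l
JOIN src_ipinfo i ON i.src = l.src
WHERE l.dstport = 60595
  AND i.country <> 'US'
GROUP BY l.src, l.dst, l.dstport, l.proto, i.country
ORDER BY hits DESC;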





Calculations involving thresholds between two tables


I have a table comment and a table price like below. The key columns in both tables are ticker_id, price_datetime and price_open. The last column, threshold, in table comment (currently NULL) is something that I need to fill in after some calculation queries (if this is achievable).


comment table:



+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| comment_id | comment_datetime | author | comment | ticker_id | price_datetime | price_open | threshold |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 1 | 2014-09-22 06:05:00 | A1 | C1 | 343 | 2014-09-22 08:00:00 | 53.25000 | |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 2 | 2014-09-22 06:39:00 | A2 | C2 | 1 | 2014-09-22 08:00:00 | 62.00000 | |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 3 | 2014-09-22 08:13:00 | A3 | C3 | 178 | 2014-09-22 08:13:00 | 5.15000 | |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+


price table:



+----------+---------------------+------------+-----------+
| price_id | price_datetime | price_open | ticker_id |
+----------+---------------------+------------+-----------+
| 1 | 2014-09-22 08:01:00 | 62.00000 | 1 |
+----------+---------------------+------------+-----------+
| 2 | 2014-09-22 08:02:00 | 62.00000 | 1 |
+----------+---------------------+------------+-----------+
| 3 | 2014-09-22 08:03:00 | 62.00000 | 1 |
+----------+---------------------+------------+-----------+


In each row of table comment, price_open will be used as the "base price".




  • For each row of table comment



    • Match the ticker_id, price_datetime and price_open with table price

    • Then, take the rows within +-2 days of each price_datetime (together with their price_open)

    • Then, check whether any price.price_open within those 5 days exceeds the "base price" by 5%, 10% or 15%




  • Conditions:



    • If any of the price.price_open within those 5 days equals or exceeds the "base price" by 15%, then fill in comment.threshold with "R"

    • If any of the price.price_open within those 5 days equals or exceeds the "base price" by 10% (but less than 15%), then fill in comment.threshold with "A"

    • If any of the price.price_open within those 5 days equals or exceeds the "base price" by 5% (but less than 10%), then fill in comment.threshold with "Y"

    • If none of the price.price_open within those 5 days reaches 5% above the "base price", then fill in comment.threshold with "C"

    • Where comment.price_datetime and comment.price_open are empty, we leave them NULL as they are, and comment.threshold stays NULL as well.




Is the above achievable in MySQL using JOIN? I am trying to learn about JOIN right now; unfortunately it seems way too complicated to me, and I have no clue about the query I should execute, as I've just started learning MySQL. (A sketch follows after the expected result below.)


I've tried explaining my question in detail, but if anything is unclear, kindly let me know.


Any help would be much appreciated. Thank you.


EDIT (as requested by Verace):


CREATE statements:



CREATE TABLE `comment` (
`comment_id` int(11) NOT NULL AUTO_INCREMENT,
`comment_datetime` datetime NOT NULL,
`author` varchar(25) NOT NULL,
`title` varchar(250) NOT NULL,
`comment` text NOT NULL,
`ticker_id` int(11) NOT NULL,
`price_datetime` datetime DEFAULT NULL,
`price_open` decimal(12,5) DEFAULT NULL,
`threshold` varchar(10) DEFAULT NULL,
PRIMARY KEY (`comment_id`)
)

CREATE TABLE `price` (
`price_id` int(11) NOT NULL AUTO_INCREMENT,
`price_open` decimal(12,5) DEFAULT NULL,
`ticker_id` int(11) NOT NULL,
`price_datetime` datetime NOT NULL,
PRIMARY KEY (`price_id`),
UNIQUE KEY `datetime` (`price_datetime`,`ticker_id`)
)


Expected result:



+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| comment_id | comment_datetime | author | comment | ticker_id | price_datetime | price_open | threshold |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 1 | 2014-09-22 06:05:00 | A1 | C1 | 343 | 2014-09-22 08:00:00 | 53.25000 | C |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 2 | 2014-09-22 06:39:00 | A2 | C2 | 1 | 2014-09-22 08:00:00 | 62.00000 | Y |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
| 3 | 2014-09-22 08:13:00 | A3 | C3 | 178 | 2014-09-22 08:13:00 | 5.15000 | R |
+------------+---------------------+--------+---------+-----------+---------------------+------------+-----------+
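For what it's worth, a sketch of one way to fill the threshold column: match each comment to the price rows within +-2 days, take the largest price_open relative to the base price, and map that ratio onto the thresholds. This assumes "exceeds by 15%" means the price reaching 115% of the base, and leaves threshold NULL where price_open is NULL:


UPDATE `comment` c
JOIN (
    SELECT c2.comment_id,
           MAX(p.price_open / c2.price_open) AS max_ratio
    FROM `comment` c2
    JOIN price p
      ON p.ticker_id = c2.ticker_id
     AND p.price_datetime BETWEEN c2.price_datetime - INTERVAL 2 DAY
                              AND c2.price_datetime + INTERVAL 2 DAY
    WHERE c2.price_open IS NOT NULL
    GROUP BY c2.comment_id
) m ON m.comment_id = c.comment_id
SET c.threshold = CASE
    WHEN m.max_ratio >= 1.15 THEN 'R'
    WHEN m.max_ratio >= 1.10 THEN 'A'
    WHEN m.max_ratio >= 1.05 THEN 'Y'
    ELSE 'C'
END;


Comments whose price_open is NULL never enter the derived table, so the inner JOIN leaves their threshold untouched.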